After reading the original Red Hat blog article about v2 in RHEL 8, I decided to “save” the complete series on my “read later” surrogate list, a.k.a. this blog post.
I still like to use the triple-A acronym due to its simplicity (!). Not easy to forget. In the context of this post I will state that AAA = Authentication and Authorization. For the sake of accuracy, I will also point you to the exact meaning as explained by Techopedia.
Nowadays, with the cloud and the as-a-service (aaS) paradigm, web 2.0 and web 3.0 applications, APIs, services and/or microservices, along come the three musketeers (of AAA). Everybody has heard of them; heck, some (if not all) web devs are on close terms with these three musketeers.
Without further ado, let’s introduce them: OpenID, OAuth and SAML. Well, yes, we did hear about them, but why are they important enough to be worth writing this article? Even though they fight for the same cause, each one does it in its very own way, and a lot of confusion arises. Here I share, for myself and others, some good resources to explore and help lift the confusion:
A series of four articles was recently published on IBM’s developerWorks portal by Ted Neward. I use this article as a rich bookmark container pointing to the above.
Yet another sad story, and maybe one of the reasons I prefer macOS (formerly OS X) as my primary OS. This post is more like a note to self. Hopefully things will change for the better one day. Today the “options” are quite limited:
Even though this is a mini article, I do anticipate some hits due to the natural interest in the topic. Of course there is a third option: just buy an external, full-featured BD player device.
Not always a success story. Since this is a work-in-progress kind of thing (I’m currently struggling with it), I will be very brief. I hit some issues (instability) on a brand new laptop using Debian-derived distros (both Ubuntu and Mint). I had a bit of comfort seeing that I’m not alone. The exact error and similar symptoms can be found here. It seems to be related either to the firmware or to adapter settings. I noticed that Intel’s page hosts a newer firmware version (iwlwifi-3160-ucode-25.30.14.0.tgz) than the one on the Linux Kernel’s page (iwlwifi-3160-ucode-16.242414.0.tgz). I will follow this lead for the moment. If I feel I’m spending too much time on it, I’m going to take more radical measures (install something else).
Update (5 min later):
It seems that the version I downloaded from Intel’s page loads without problems.
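For reference, a minimal sketch of how such a firmware swap usually goes on a Debian-based system. The archive name comes from this post; the directory layout inside the tarball and the exact driver module names are assumptions, so verify against the README shipped with the firmware:

```shell
# Hypothetical sketch: install the newer iwlwifi 3160 microcode from Intel's archive.
# Assumes the tarball has already been downloaded to the current directory.
tar xzf iwlwifi-3160-ucode-25.30.14.0.tgz
sudo cp iwlwifi-3160-ucode-25.30.14.0/iwlwifi-3160-*.ucode /lib/firmware/

# Reload the driver stack so the new microcode is picked up
sudo modprobe -r iwlmvm iwlwifi
sudo modprobe iwlwifi

# Check which .ucode file the driver actually loaded
dmesg | grep -i 'iwlwifi.*ucode'
```

A reboot works just as well as the modprobe dance if the module refuses to unload.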
I’ve been quite ignorant about it, like a true “modern” cloud services consumer. Well, this weekend I managed to catch up a little bit. It looks like there’s both good and bad news. The good news: our data is now hosted on Dropbox’s own storage infrastructure (off AWS, where it was originally hosted). Yes, I missed that announcement. For me, as a data storage professional, the details of their Magic Pocket architecture are quite interesting from a technical point of view. I had read before about a somewhat similar approach (implemented by another provider/vendor). Now, the bad news is about their ideas for the future. They call it “Project Infinite” and you may read all the gory details here. Why do I call it bad news? Simply because I share most of the views already expressed by other people in the comments section of their page. I’m already uncomfortable with Oracle’s VirtualBox kext. I don’t need more.
This article is intended for people who like to play with their laptops at home, taking advantage of the CPU‘s virtualization capabilities and therefore firing up several VMs. I assume everybody knows by now what a VM is. The story is more compelling for those using a laptop with an internal SSD, which (due to higher cost) usually has less usable storage capacity than an old-school HDD. I like to keep a Windows-based VM around to test random stuff from time to time, or to use some specific tools which either don’t yet have a good Unix/Linux alternative or just because I’m too lazy. My hypervisor of choice is Oracle’s VirtualBox, due to its simplicity and user friendliness. My Windows 7 VM (more specifically, the virtual disk used by it, a.k.a. the “W7_SystemDisk.vdi” file) has recently grown to 30 GiB, which is a lot for my limited SSD capacity. Therefore, I decided I must do something about it. My decision was grounded in the fact that I rarely use this VM (like once a week) and I have more important ones to build and run (i.e. trying new Linux distribution releases). Given that my laptop has a built-in SD slot, I went and bought a Samsung micro SDXC card to use as a new home for my aforementioned VM.
Samsung Pro 64
Of course, one can use a standard USB flash drive for the very same purpose. The next important choice I had to make was, of course: which file system to use? There is always a trade-off between usability (portability) and performance. As I knew performance would drop considerably after moving my VM onto this little thing, I quickly made up my mind: performance is my priority. Also, since I had recently found a nice, tiny utility for Windows which I wanted to test, I had the perfect opportunity to do so.
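As a side note, the move itself can be scripted with VBoxManage. A sketch, assuming a storage controller named “SATA” and the paths shown here (both are guesses; check your VM’s storage settings first and power the VM off before copying):

```shell
# Hypothetical sketch: relocate the VDI to the SD card and re-attach it to the VM.
VM="W7"
OLD_VDI="$HOME/VirtualBox VMs/W7/W7_SystemDisk.vdi"
NEW_VDI="/Volumes/SDXC/W7_SystemDisk.vdi"

cp "$OLD_VDI" "$NEW_VDI"   # copy the disk image while the VM is powered off

# Detach the old disk, drop it from the media registry, attach the copy
VBoxManage storageattach "$VM" --storagectl "SATA" --port 0 --device 0 --medium none
VBoxManage closemedium disk "$OLD_VDI"
VBoxManage storageattach "$VM" --storagectl "SATA" --port 0 --device 0 \
  --type hdd --medium "$NEW_VDI"
```

Only after booting the VM successfully from the new location would I delete the original file.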
My VM system disk (C:) was looking like this:
Drive C: Properties
Here is what I did next:
Benchmark #1
– formatted the SDXC card using HFSPlus
– copied the entire “TestVM_Folder” under the root
– started the VM from the SDXC card, then ran Parkdale
Parkdale Default Settings
Windows 7 performance result
VirtualBox VDI (NTFS-inside) on top of HFSPlus
Benchmark #2
– formatted the SDXC card using NTFS
– copied the entire “TestVM_Folder” under the root
– started the VM from the SDXC card, then ran Parkdale
Windows 7 performance result
VirtualBox VDI (NTFS-inside) on top of NTFS
Benchmark #3
– formatted the SDXC card using ExFAT
– copied the entire “TestVM_Folder” under the root
– started the VM from the SDXC card, then ran Parkdale
Windows 7 performance result
VirtualBox VDI (NTFS-inside) on top of ExFAT
Conclusion:
ExFAT was the best choice for me, as my priority was performance.
One may go further and apply some well-known NTFS performance hacks.
I had to revisit and update this post to make it more useful. Initially, it was about the Mozilla Firefox browser throwing the error ssl_error_no_cypher_overlap back at its users when they tried to access certain URLs.
Nowadays, the current Firefox ESR simply reports a “Secure Connection Failed”, leaving users to deal with their frustration.
The well-known quick & dirty workaround is described below:
1) Open the configuration page “about:config” and search for the item “security.tls.version.min”.
2) Double-click the configuration item mentioned above and set its value to “0” (zero).
Visual Help
WARNING! Even if the above ‘solves’ your trouble, it’s worth understanding the real issue here.
The change was required for security reasons; specifically, as a reaction to the recent ‘POODLE‘ vulnerability.
Oh, and the not-so-good news is that this is also related to SSL.
Yes, just as we were slowly recovering from the ‘HEARTBLEED‘ frenzy.
OK, so what is new here? Well, I recently learned that the above trick may no longer suffice. But why?
Because there is a limit for everything, including how many times Mozilla will allow you to weaken its security.
Luckily, we can modify this as well. Just search for security.tls.version.fallback-limit and set it to “0” (zero).
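For what it’s worth, both prefs can also be pinned in a user.js file in your Firefox profile folder; a config fragment with the same values as the about:config steps above:

```js
// user.js — applied on every Firefox start-up; same effect as the about:config edits.
// WARNING: both settings deliberately weaken TLS version enforcement.
user_pref("security.tls.version.min", 0);
user_pref("security.tls.version.fallback-limit", 0);
```

Remember to remove the file once the broken server you are dealing with gets fixed.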
If you have an older version of Firefox, this article is also worth reading.
New year, new toys, new people, new love stories, a whole new world to discover, explore and learn from. This is my first blog entry for 2014 and I hope the title will fit the content well. Most people don’t care too much about the things geeks do, and why should they, after all? A geek family member or close friend can be found in almost any family today. The average person doesn’t value or care too much about their own digital/electronic data (unless they lost some). Digital data ownership itself has become a bit blurry these days. Most geeks do care about it, which is why they are generally more knowledgeable about how to store it, transfer it and protect it. Data = Information = Power. Do I make any sense so far? I do hope so. But remember that power corrupts. We don’t question much and just take everything for granted. Privacy? Yes, there are concerns and issues, but that is normal and natural. We are not supreme beings; we are not perfect. Consequently, neither is our modern society, nor will it ever be. But we possess the ability to adapt. That being said, I think my introduction is over.

Adapt by learning a “hot” new way for data transfer. “Hot”, in the modern non-dictionary sense, means “popular”. Everyone has heard about cloud storage as offered by many service providers like Amazon, Google, Dropbox, Bitcasa, Microsoft, etc. But what do all these service providers have in common? They host your data. Remember the question marks about privacy and ownership? What if you care so much about your data that you neither trust them enough nor want to waste your time encrypting it before letting them host it? Well, you could simply cut them out of the loop. Here is how: http://bit.ly/1e2mkzg
A short note to close this article: the good old protocols will still be around (nntp, ftp, sftp, ftps, irc, rsync, etc.), as well as the network/distributed file systems (afs, nfs, cifs, dfs, hadoop, glusterfs, gpfs, etc.). I hope you found this info as useful as I did.