Unfortunately, Google, via Chrome, is forcing HTTPS on .dev domains. They are doing this for their own internal reasons, but it's a bit annoying for web developers. Here's the full article.
Easiest solution moving forward: don't use .dev.
So, some of you may have gotten MAMP to work happily with self-generated SSL certificates. It’s a bit tricky and I’ll assume you’ve got that working.
… a quick tip on getting OS X to shut down the default installed Apache so MAMP can run on ports 80 and 443:
(found here… https://gist.github.com/jfloff/5138826 )
First of all, you need to be able to run MAMP on port 80. Do a quick "heat check" to make sure no other process is jamming the HTTP ports. You can check it like this:
sudo lsof | grep LISTEN
If you do happen to have a process showing something like *:http (LISTEN), you are in trouble. Before going any further, check that it isn't MAMP itself (yeah, you should close that beforehand):
ps
# force the removal of the running job
$ sudo launchctl remove org.apache.httpd
# load it again...
$ sudo launchctl load -w /System/Library/LaunchDaemons/org.apache.httpd.plist
# ...and then unload it with -w so it stays disabled across reboots
$ sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist
Now you should be able to use port 80 (and almost any other) in MAMP. Just go to MAMP > Preferences > Ports tab and click "Set to default Apache and MySQL ports".
….now back to SSL certs
However, there’s a new wrinkle. Chrome and Firefox have both decided that self-signed certificates need to be of the version 3 variety, rather than the plain old ones generated by MAMP. I ran into an issue where Chrome was complaining about a missing subjectAltName in the certificate that I had set up.
So, here’s the article I used to get my stuff sort of working:
https://alexanderzeitler.com/articles/Fixing-Chrome-missing_subjectAltName-selfsigned-cert-openssl/
Here’s another version of that:
How to Create Your Own SSL Certificate Authority for Local HTTPS Development
OMG, you say, that’s like waaaaaaa? No worries, I’ll help break it down here and do it a little differently.
They have you create all sorts of scripts. I’m not sure why, probably because it’s the right way to do it, but here’s the straightforward way to set it up.
What you are doing is creating your own CA certificate (aka a certificate authority), then using that to create a certificate for your site that needs SSL.
In the following directions, you need to replace YOURLOCALSITEDOMAIN with the domain you are setting up on your MAMP server. You know, like mysite.dev, or sams-site.dev, etc…
Go to the directory where you store your SSL certificates for MAMP and do the following:
STEP 1
On the command line type out the following:
openssl genrsa -des3 -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem
– this sets your server up to be a CA certificate issuer
– it’s going to ask you a bunch of questions about the country, state, city, and other things. Just answer them with your own info 🙂 The questions will be similar to the parameters you see in the [dn] section in the code below.
STEP 2
Create a file called YOURLOCALSITEDOMAIN.csr.cnf with the following:
[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[dn]
C=US
ST=New York
L=Rochester
O=End Point
OU=Testing Domain
emailAddress=YOUREMAILADDRESS
CN = YOURLOCALSITEDOMAIN
– This is a configuration file that will be used when generating your specific site certificates. Change the ST, L, email parameters to whatever you want. I’d go ahead and use your own email.
STEP 3
Then, create a file called v3.ext with the following:
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = localhost
DNS.2 = YOURLOCALSITEDOMAIN
– This is the file that is used by the CA issuer to ensure your cert is version 3 and offers up the named domain as you see in the parameter DNS.2.
STEP 4
Then generate the certificates with this!!! On the command line, type out the following (don’t forget to replace YOURLOCALSITEDOMAIN with whatever development domain you are using):
openssl req -new -sha256 -nodes -out YOURLOCALSITEDOMAIN.csr -newkey rsa:2048 -keyout YOURLOCALSITEDOMAIN.key -config <( cat YOURLOCALSITEDOMAIN.csr.cnf )
openssl x509 -req -in YOURLOCALSITEDOMAIN.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out YOURLOCALSITEDOMAIN.crt -days 5000 -sha256 -extfile v3.ext
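If you want to double-check that the new cert actually came out as a version 3 certificate with the subjectAltName Chrome is looking for, you can inspect it with openssl (a quick sanity check, run from the same directory):

openssl x509 -in YOURLOCALSITEDOMAIN.crt -noout -text | grep -A 1 "Subject Alternative Name"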
Now, when you need to get a second site working, you'll repeat steps 2 through 4. HOWEVER, you won't recreate the v3.ext file. You'll just add a new DNS parameter with your new domain. So, in the above example, I'd be adding DNS.3 = NEWSITEDOMAIN. You'd add a new DNS parameter for each new secure site you do.
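If you still need to point MAMP's Apache at the new cert and key, a minimal SSL vhost sketch looks roughly like this (the paths, DocumentRoot, and ssl directory here are assumptions; adjust them to wherever MAMP keeps its SSL config and your site files):

<VirtualHost *:443>
    ServerName YOURLOCALSITEDOMAIN
    DocumentRoot "/Applications/MAMP/htdocs/YOURLOCALSITEDOMAIN"
    SSLEngine on
    SSLCertificateFile "/Applications/MAMP/conf/apache/ssl/YOURLOCALSITEDOMAIN.crt"
    SSLCertificateKeyFile "/Applications/MAMP/conf/apache/ssl/YOURLOCALSITEDOMAIN.key"
</VirtualHost>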
STEP 5
Now, open your Keychain Access app in OS X and add your new certs, then set them to always be trusted. That way your Mac will stop throwing warnings. Also, if you are looking at your site via the CodeKit Bonjour URL, then you'll need to add the temp SSL certificate CodeKit creates. You'll find that in the My Certificates section of the Keychain Access app.
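If you'd rather do the trust step from the command line instead of clicking through Keychain Access, something like this should work for the root CA you generated in step 1 (run from the directory holding rootCA.pem):

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain rootCA.pem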
I also ran into a thing with iThemes Security. The .htaccess rules were causing redirect loops over SSL. You could get to the home page, but secondary pages resulted in a 500 error. Replacing the iThemes Security SSL feature with the plugin 'Really Simple SSL', then clearing out the config that iThemes put in the .htaccess file, cleared that right up.
OMG, that made your brain hurt, right? It made mine hurt for a bit too, but hopefully all is working for you now.
Ok, so on April 1, 2015…no kidding…WordPress announced they are officially supporting the newer charset utf8mb4 in WP version 4.2. This charset is available in MySQL servers version 5.5.3 and higher (I think). This is cool and all, rather hipster, and up to date with best practices and all that. However, what they completely missed was the fact that a huge portion of the shared server world is running MySQL versions 5.0 and 5.1. If you try to migrate a WordPress site from a server (or local box) that is appropriately running something remotely current to a shared hosting account with the older database, your migration will fail.
Here’s the article where they describe the changes and the beginning of the conversation about how it’s an issue…
https://make.wordpress.org/core/2015/04/02/the-utf8mb4-upgrade/
I’m sure there are thousands of developers wasting time figuring out how to fix this issue. I’ve not found an ‘easy’ fix. Here’s my clunky version of migrating from development to staging/production…
This process assumes you’ve migrated the source code of WordPress already.
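As a rough sketch, the general idea is to export the database with a charset the old server understands, rewrite any remaining utf8mb4 references in the dump, and then import. The database names and users below are placeholders, and the sed rewrite is a common workaround rather than an official fix, so test it on a copy first:

# export from the modern MySQL box, forcing a charset the old server understands
mysqldump --default-character-set=utf8 -u devuser -p devdb > devdb.sql

# rewrite any remaining utf8mb4 references in the dump (BSD/macOS sed shown)
sed -i '' -e 's/utf8mb4_unicode_ci/utf8_unicode_ci/g' -e 's/utf8mb4/utf8/g' devdb.sql

# import into the older MySQL 5.0/5.1 server
mysql -u produser -p proddb < devdb.sql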
You can blame ISPs for not getting current, but I think trying to upgrade these older MySQL servers to something more current is an enormous headache for clients and ISPs alike. WordPress should really have paid attention to the reality of this situation and taken measures to mitigate the issue.
Happy hunting for an easier solution, but if you need, the above will work.
I’ve recently encountered an issue twice in the last few weeks where the font-face fonts on a website stopped working after a migration to a new hosting provider.
I used the nifty tool, Backup Buddy from iThemes, and all went well, except on the new server the embedded fonts weren’t rendering. :/
After many hours of searching, checking headers, and such, I bothered to look at the console output in Chrome and noticed that there was an error speaking to cross-domain issues specifically regarding the font files.
So what was the issue? In WordPress, you can set the WordPress Address and the Site Address separately. If one is www.somedomain.com and the other is somedomain.com (aka the sub-domain doesn’t match) then you’ll get this pesky error.
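If you'd rather pin those two values down in code than rely on the settings screen, a tiny wp-config.php sketch (using the example domain from above) would be:

// force both addresses onto the same host so font requests aren't cross-domain
define( 'WP_HOME',    'http://www.somedomain.com' );
define( 'WP_SITEURL', 'http://www.somedomain.com' );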
Hope that helps.
Hey, this is NOT a comprehensive guide to setting up your own development environment, but I thought I’d post a few pitfalls I’ve discovered.
Most Macs are set up out of the box to run the Apache found in /etc/apache2/. Normally, the system user and group for Apache is _www. If you try to set up sites in /Users/HOMEDIR/Sites, the files there will likely get the ownership:group of YOURUSER:staff (or something other than _www). It either needs to be _www:_www OR YOURUSER:_www (and then you change the httpd.conf file to reflect YOURUSER for the User variable).
Also, to edit any system files like that, be sure to edit as a superuser or with sudo.
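As a rough sketch (YOURUSER here is a placeholder for your actual account name):

# hand the Sites directory to your user, with the _www group
sudo chown -R YOURUSER:_www /Users/YOURUSER/Sites

# then point Apache at your user in /etc/apache2/httpd.conf
sudo nano /etc/apache2/httpd.conf
#   User YOURUSER
#   Group _www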
A client of mine had an encounter with a nasty hack using an installation of the theme/plugin OptimizePress on a WordPress site. The hack gives over control of your hosting account. Here’s an in-depth write-up…
Ok, so ever since I upgraded to Mavericks from Mountain Lion, I’ve had crazy issues dealing with font files that I’ve downloaded. At first, I thought it might be some sort of font issue with Mavericks, but it turns out it’s a quarantine issue with downloaded files.
For a bit now, OS X has quarantined files that are downloaded. It’s a nice security feature, but dang, it’s really messing me up :/ I’m not sure yet why my box is behaving this way. I’m sure there’d be massive outcry if this were a rampant issue.
When a file is downloaded, some extra metadata is added to it. You can tell from the command line, as the permissions look something like this when you run ‘ls -la’ …
drwxr-xr-x@
The workaround I’ve pieced together, which I have to do each time I download a zip file, is to unpack it, then run xattr on it to remove the quarantine flag. Here’s an example of a style.css file that I ran this on to allow scripts on my box to see the file.
xattr -d -r com.apple.quarantine style.css
or
xattr -dr com.apple.quarantine style.css
What should happen is when I click to unzip the zip file, I should get a GUI alert that asks me if I really want to open this file from the internet. Sadly, it’s not happening on my box.
Anyway, hope this helps the random person out there searching for a possible solution.
It would be helpful to let you know how to actually find the bit of data to remove. Above you see com.apple.quarantine. That’s the metadata you need to remove. To find it, simply type…
xattr somefilename
That’ll output a string which you’d put in place of ‘com.apple.quarantine’ as seen above.
So, today I was helping out a colleague with some WordPress installations. When adding plugins or updating ones that were there, I encountered this widely discussed but not resolved error…
PCLZIP_ERR_BAD_FORMAT (-10) : Unable to find End of Central Dir Record signature
Now, there’s a bunch of folks talking about this issue, with solutions ranging from not enough disk space, to permissions issues.
This issue occurred for me on a mediatemple grid server. I was not able to upload or upgrade any WordPress plugins on new or older versions of WordPress. This occurred in WordPress installs that did not have this problem before. Further, if I was on the server via the commandline, I could scp the zip files up and unzip them. The issue was only within the WordPress GUI. So, my guess is whatever process WordPress uses to unzip plugins is what’s causing the problem.
I’m pretty sure that the issue came about when the ZIP package was globally updated on the server recently. Since the php.ini file for the account was customized in some way, and it didn’t reference the correct zip extension, unpacking zip files via the WordPress GUI failed.
When I added ‘extension = zip.so’ to the php.ini file, this resolved the issue. I don’t know that this is the perfect solution for anyone else, but just another data point for folks looking for info.
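To confirm the extension is actually loading after the change, a quick check from the command line (assuming the CLI PHP reads the same php.ini as the web server) would be something like:

php -m | grep -i zip

# or check for the class WordPress's unzipper can use
php -r 'var_dump(class_exists("ZipArchive"));'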
Hope this helps someone out there 🙂
As a web developer, I often encounter clients who have a hosting package that is limited, or ‘secured’ by the hosting provider. That means I sometimes am forced to use the dreaded FTP for file transfers rather than SCP.
This is ok when it’s just a few files here or there. In fact, using the GUI can sometimes be convenient. However, if I’m doing large scale development, I’d rather copy the site over to my development server than work locally and fill up my hard drive.
If the client’s hosting provider does not allow SSH access, then you can use FTP from the commandline 🙂
ftp ftp.example.com will get you there. Then, depending upon your flavor of Linux, you can use mget to pull files. Sometimes you are even offered the awesomeness of a recursive mget *.
When you are not afforded that goodness, I’ve discovered that WGET does the trick even better 🙂
wget -r "ftp://user:pass@ftp.example.com/somedirectory"
That’ll recursively get it all for you 🙂 Better yet…mirror
wget -m "ftp://user:pass@ftp.example.com/somedirectory"
That initiates recursion, gives you infinite depth on directories…and…gives you the same timestamps as exist on the remote server.
Nice stuff.
Today, I’m setting up a client for web hosting. I have two favorite hosting companies for the clients that are mindful of their costs. One is Site5.com, and the other is Media Temple. Both are great companies with excellent service and servers.
Unfortunately, putting an SSL certificate into the fray manages to make them both, well, less palatable.
Site5 does not sell SSL certs, which means you have to purchase the cert from a third party and have it installed on Site5. Yes, of course that’s doable, but rather a complicated affair. When thinking in terms of the longevity of the account, the likelihood that the client will have to deal with renewals down the road, and all that, it’s just setting the client up for hard times in the future.
Media Temple offers SSL certs, but for $120/year. Uhm, wow, well, jeez, that’s almost twice as much as GoDaddy, the industry’s most notorious budget host. So that just stinks.
So I’m pondering what to do at this point. Bluehost…meh. 1and1…meh. Some smaller outfit with great prices…meh.  ………………. OMG…GoDaddy? Gulp.
I feel like a lemming right now. *sigh*