Search Results

Search found 8 results on 1 page for 'thebluefox'.


  • Amazon EC2 Ubuntu with GitLab and nginx - can't load?

    - by thebluefox
    Ok, so I've spooled up an Amazon EC2 server running Ubuntu, and then followed the instructions below to install GitLab:

    http://doc.gitlab.com/ce/install/installation.html

    The only step I've not been able to complete is running the following status check:

        sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

    I get the following error:

        rake aborted!
        Errno::ENOMEM: Cannot allocate memory - whoami

    I presume this is because my EC2 is just running a free-tier setup, so isn't that well spec'd. Regardless, I've been trying to access GitLab through my browser. I've set up the elastic IP and pointed my domain at it (for the purpose of this, let's say it's git.mydom.co.uk). Doing a whois on this domain shows me it's pointing to the right place. For some reason though, I get "Oops, Chrome could not connect to git.mydom.co.uk".

    Now, for a period of time I was getting the nginx holding page (telling me I still needed to perform configuration). That disappeared after I removed the default file from /etc/nginx/sites-enabled/ (after reading on a troubleshooting page that it could be the issue). Since then I've had nothing, even when I symlinked the file back in from /sites-available.

    I've tried changing the owner of the git.mydom.co.uk file sat inside /sites-enabled and /sites-available to www-data, as suggested here, but I could only change it on the file in /sites-available, not the symlinked one in /sites-enabled. The content of this file is as follows (excerpt):

        upstream gitlab {
          server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
          listen *:80 default_server;  # e.g., listen 192.168.1.1:80; in most cases *:80 is a good idea
          server_name git.mydom.co.uk; # e.g., server_name source.example.com;
          server_tokens off;           # don't show the version number, a security best practice
          root /home/git/gitlab/public;

          # Increase this if you want to upload large attachments
          # or if you want to accept large git objects over http
          client_max_body_size 20m;

          # individual nginx logs for this gitlab vhost
          access_log /var/log/nginx/gitlab_access.log;
          error_log /var/log/nginx/gitlab_error.log;

          location / {
            # serve static files from defined root folder
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

    All the paths mentioned in here look OK... I'm about at the end of my knowledge now!
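    For what it's worth, the two symptoms may be separate problems. The ENOMEM during the rake check is common on free-tier instances, which ship with little RAM and no swap, so fork() can fail outright. A minimal sketch of a common workaround, assuming a micro instance with no existing swap (the /swapfile path and 1 GB size are illustrative choices, not anything the GitLab guide prescribes):

        # create and enable a 1 GB swap file (add an /etc/fstab entry to survive reboots)
        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile

    As for "could not connect" (as opposed to an nginx error page), that usually means nothing answered on port 80 at all, so it's worth confirming the instance's EC2 security group allows inbound TCP 80 and that nginx is actually running (sudo service nginx status) before digging further into the vhost file.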

    Read the article

  • jQuery prettyPhoto/Lightbox with Loupe Magnifier

    - by thebluefox
    Morning gang. Right, I'm trying to crowbar a magnifier, like this one, into prettyPhoto (picked because the JS is nice). The trouble I'm having is initiating the loupe function once prettyPhoto has loaded. If I include it in the prettyPhoto JS, it either gets itself into an endless loop or doesn't get called at all.

    I've nearly got it working by putting a link next to the close button that calls the function inline, like so:

        <a href="#" onclick="$('.TB_Image').loupe(); return false;">Magnify</a>

    The only problem here is that the return false doesn't work, although it does work when the call to loupe isn't in the onclick event. Doing it this way does run the loupe function, but the homepage gets loaded, so I don't know if it actually works or not.

    Has anyone got any suggestions at all? All help much appreciated!
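    A likely explanation for the return false being ignored: if $('.TB_Image').loupe() throws (for instance because the selector matches nothing until prettyPhoto has finished rendering), the onclick aborts before it ever reaches return false, and the browser follows the href. A minimal sketch of a safer binding, assuming jQuery 1.7+ for .on() (on older versions .live('click', ...) is the delegated equivalent) and a hypothetical magnify class on the injected link:

        // delegated handler, so it also fires for links prettyPhoto injects later
        $(document).on('click', 'a.magnify', function (e) {
          e.preventDefault();       // cancel the navigation first, before anything can throw
          $('.TB_Image').loupe();   // then initialise the magnifier on the lightbox image
        });

    Cancelling the event up front means that even if loupe() blows up, you stay on the page and get a useful error in the console instead of a silent reload of the homepage.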

    Read the article

  • Using Solr and Zend's Lucene port together...

    - by thebluefox
    Afternoon chaps. After my adventures with Zend-Lucene-Search, and discovering it isn't all it's cracked up to be when indexing large datasets, I've turned to Solr (thanks to Bill Karwin for that :) ).

    I've got Solr indexing the db far, far quicker now, taking just over 8 minutes to index a table of just over 1.7 million rows, which I'm very pleased with. However, when I come to try and search the index with the Zend port, I run into the following error:

        Fatal error: Uncaught exception 'Zend_Search_Lucene_Exception' with message
        'Unsupported segments file format' in /var/www/Zend/Search/Lucene.php:407
        Stack trace:
        #0 /var/www/Zend/Search/Lucene.php(555): Zend_Search_Lucene->_readSegmentsFile()
        #1 /var/www/z_search.php(12): Zend_Search_Lucene->__construct('tmp/feeds_index')
        #2 {main}
          thrown in /var/www/Zend/Search/Lucene.php on line 407

    I've tried to have a search around but can't seem to find anything about this problem; everyone else just seems to be able to get them to work? Any help as always much appreciated :)

    Thanks, Tom
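    The mismatch here is fundamental: Solr builds its index with a much newer version of Java Lucene than the on-disk format Zend_Search_Lucene implements, so the Zend port cannot open Solr-built segments, which is exactly what "Unsupported segments file format" is reporting. The usual approach is to leave reading the index to Solr and query it over its HTTP API. A minimal sketch, assuming a default single-core Solr install on localhost:8983 (host, path, and field names are illustrative):

        <?php
        // query Solr over HTTP instead of opening its index files directly;
        // requires allow_url_fopen, otherwise swap file_get_contents for cURL
        $url = 'http://localhost:8983/solr/select'
             . '?q=' . urlencode('title:lucene')
             . '&rows=10&wt=json';

        $result = json_decode(file_get_contents($url), true);

        foreach ($result['response']['docs'] as $doc) {
            echo $doc['id'], "\n";  // each doc is an array of its stored fields
        }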

    Read the article

  • PHP/jQuery/AJAX, a few teething problems...

    - by thebluefox
    Morning/afternoon guys. Writing some jQuery AJAX shizz and getting a bit stuck. I've got the actual process of calling the PHP file done perfectly; it's just trying to get the HTML on the page to change in the way I want it to. I want to get rid of the a with the id of the one used in the AJAX call etc, and replace it with the HTML passed back from the PHP file. Code is as follows...

        $(".save_places").click(function() {
          $.ajax({
            url: "{/literal}{$sRootPath}{literal}system/ajax/fan_bus.php",
            type: "POST",
            data: ({id : this.getAttribute('id')}),
            dataType: "html",
            success: function(msg){
              $(this).before(msg);
              $(this).empty();
              alert(msg);
            }
          });
          return false;
        });

    And the HTML is pretty simple:

        <p class="links">
          <a href="#" class="save_places" id="bus_{$businesses.results[bus].id}_{$sMemberDetails.id}"><img src="{$sThemePath}images/save_places.png" alt="Save to My Places" /></a>
          <a href="#"><img src="{$sThemePath}images/send_friend.png" alt="Send to a Friend" /></a>
        </p>

    All the stuff in the success function is experimental mashing of code; any help please? Thanks as always.
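    The main trap in the success handler: inside an $.ajax callback, this no longer refers to the clicked link (by default it is the ajax settings object), so $(this).before(msg) and $(this).empty() quietly operate on the wrong thing. A minimal sketch of the usual fix, capturing the element before the request (selector and Smarty-templated URL carried over from the question):

        $(".save_places").click(function () {
          var $link = $(this);  // capture the clicked <a> before the callback runs
          $.ajax({
            url: "{/literal}{$sRootPath}{literal}system/ajax/fan_bus.php",
            type: "POST",
            data: { id: this.getAttribute("id") },
            dataType: "html",
            success: function (msg) {
              $link.replaceWith(msg);  // swap the link for the returned HTML in one step
            }
          });
          return false;
        });

    Alternatively, $.ajax accepts a context option (context: this) that makes this inside the callbacks refer to the clicked element, if you'd rather keep the original handler body.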

    Read the article

  • Indexing large DBs with Lucene/PHP

    - by thebluefox
    Afternoon chaps. Trying to index a 1.7 million row table with the Zend port of Lucene. On small tests of a few thousand rows it's worked perfectly, but as soon as I try and up the rows to a few tens of thousands, it times out. Obviously I could increase the time PHP allows the script to run, but seeing as 360 seconds gets me ~10,000 rows, I'd hate to think how many seconds it'd take to do 1.7 million.

    I've also tried making the script run a few thousand, refresh, and then run the next few thousand, but doing this clears the index each time. Any ideas guys? Thanks :)
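    The index being wiped between runs is the classic symptom of calling Zend_Search_Lucene::create() on every request: create() truncates an existing index, whereas open() appends to it. A minimal sketch of a batch-friendly pattern, assuming Zend Framework 1's Zend_Search_Lucene with an illustrative index path and an already-fetched batch of rows:

        <?php
        require_once 'Zend/Search/Lucene.php';

        $indexPath = 'tmp/feeds_index';  // illustrative path

        // create() wipes any existing index, so only use it for the very first
        // batch; open() lets each subsequent run carry on where the last stopped
        $index = is_dir($indexPath)
            ? Zend_Search_Lucene::open($indexPath)
            : Zend_Search_Lucene::create($indexPath);

        foreach ($rows as $row) {  // $rows: the current batch from the DB
            $doc = new Zend_Search_Lucene_Document();
            $doc->addField(Zend_Search_Lucene_Field::Keyword('pk', $row['id']));
            $doc->addField(Zend_Search_Lucene_Field::Text('title', $row['title']));
            $index->addDocument($doc);
        }

        $index->commit();  // flush this batch to disk before the script exits

    Running the batches from the command line also sidesteps the web-server timeout entirely, since CLI PHP has a max_execution_time of 0 by default.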

    Read the article

  • Saving a remote image with cURL?

    - by thebluefox
    Morning all. There's a few questions around this, but none that really answer my question as far as I can understand. Basically, I have a GD script that deals with resizing and caching images on our server, but I need to do the same with images stored on a remote server. So I want to save the image locally, then resize and display it as normal. I've got this far...

        $file_name_array = explode('/', $filename);
        $file_name_array_r = array_reverse($file_name_array);
        $save_to = 'system/cache/remote/'.$file_name_array_r[1].'-'.$file_name_array_r[0];

        $ch = curl_init($filename);
        $fp = fopen($save_to, "wb");

        // set URL and other appropriate options
        $options = array(
            CURLOPT_FILE => $fp,
            CURLOPT_HEADER => 0,
            CURLOPT_FOLLOWLOCATION => 1,
            CURLOPT_TIMEOUT => 60  // 1 minute timeout (should be enough)
        );
        curl_setopt_array($ch, $options);

        curl_exec($ch);
        curl_close($ch);
        fclose($fp);

    This creates the image file, but does not copy it across? Am I missing the point? Cheers guys.
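    An empty file being created usually means fopen() succeeded but the transfer itself failed, and nothing in the snippet checks for that. A minimal sketch of the same transfer with the error reporting added (variable names carried over from the question):

        <?php
        $ch = curl_init($filename);  // must be a full URL, scheme included
        $fp = fopen($save_to, "wb");

        curl_setopt_array($ch, array(
            CURLOPT_FILE           => $fp,
            CURLOPT_HEADER         => 0,
            CURLOPT_FOLLOWLOCATION => 1,
            CURLOPT_TIMEOUT        => 60,
        ));

        if (curl_exec($ch) === false) {
            // DNS failure, timeout, blocked redirect, etc: the file stays empty
            echo 'cURL error: ' . curl_error($ch);
        } else {
            $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            if ($status != 200) {
                echo "Remote server answered HTTP {$status}";  // a 404 page also gets saved
            }
        }

        curl_close($ch);
        fclose($fp);

    Whatever curl_error() reports should point at the real culprit; common ones for this exact symptom are a relative or scheme-less $filename, or the system/cache/remote/ directory not existing relative to the executing script.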

    Read the article

  • How to code a 'Next in Results' within search results in PHP

    - by thebluefox
    Right, bit of a head scratcher, although I've got a feeling there's an obvious answer and I'm just not seeing the wood for the trees. Basically, I'm using Solr as a search engine for my site, bringing back 15 results per page. When you click on a result you get a detail page, which has a "Next in Results" link on it that forwards you on to the next result. What's the best way of doing this? I've come up with a few solutions, but they're either impractical or just don't work.

    I could store all the ids in a session array, then grab the one after the current one and put that in the link. But with possibly hundreds or thousands of results, the memory that array would need, and the performance hit of dealing with it, isn't practical.

    I could take the same approach and put it into the db, but I'd still have to deal with a potentially huge array when I grab them out of the db.

    Or I could do the search again, returning only the ids, and grab the one after the one we're currently looking at. I think this could be the best option? Although it does seem kind of messy, namely because of when I have to select the id that's on a different 'page' (ie the 16th, 31st etc result). Unless I pass through where it was in the results and select from there, but that still doesn't seem like the right way to do it.

    I'm really sorry if this is just complete nonsense; any help is massively appreciated as always. Cheers guys!
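    For what it's worth, the third option maps cleanly onto Solr's own paging parameters and avoids the 'page' problem entirely, because start is an absolute offset into the full result set. Pass the result's position through to the detail page, then re-run the same query asking for a single id at position + 1. A minimal sketch (the pos and q request parameters are hypothetical names):

        <?php
        // $pos is the 0-based position of the result currently being viewed
        $pos = (int) $_GET['pos'];
        $url = 'http://localhost:8983/solr/select'
             . '?q=' . urlencode($_GET['q'])
             . '&start=' . ($pos + 1)  // absolute offset: works across page boundaries
             . '&rows=1'
             . '&fl=id'                // fetch only the id, keeping the response tiny
             . '&wt=json';

        $response = json_decode(file_get_contents($url), true);
        $docs     = $response['response']['docs'];
        $nextId   = $docs ? $docs[0]['id'] : null;  // null when on the last result

    One caveat: the sort order must be deterministic (tie-break on a unique field such as the id) or the 'next' document can shift between the original search and this follow-up request.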

    Read the article

  • Refining Solr searches, getting exact matches?

    - by thebluefox
    Afternoon chaps. Right, I'm constructing a fairly complex (to me anyway) search system for a website using Solr, although this question is quite simple, I think...

    I have two search criteria, location and type. I want to return results that are exact matches on type (letter for letter, no exceptions) and like-matches on location. My current search query is as follows:

        ../select/?q=location:N1 type:blue&rows=100&fl=*,score&debugQuery=true

    This firstly returns all the type blues that match N1, but then also returns any type that matches N1, which is the opposite of what I'm after. Both fields are set as textgen in the Solr schema. Any pointers? Cheers gang
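    Two separate issues produce that symptom. First, Solr's default operator is OR, so in q=location:N1 type:blue both clauses are optional and documents matching only location still come back. Second, a textgen field is tokenised and filtered, so it can never give a true letter-for-letter match; that needs a non-analysed string field. A sketch of one common arrangement, assuming a hypothetical type_exact field added to schema.xml:

        <!-- schema.xml: a non-analysed copy of "type" for exact matching -->
        <field name="type_exact" type="string" indexed="true" stored="false"/>
        <copyField source="type" dest="type_exact"/>

    After reindexing, make the exact clause a hard filter so it is required and doesn't interfere with scoring:

        ../select/?q=location:N1&fq=type_exact:blue&rows=100&fl=*,score&debugQuery=true

    Keeping it in the main query with required operators (q=+location:N1 +type_exact:blue) works too; fq has the added benefit of being cached independently by Solr.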

    Read the article
