Daily Archives

Articles indexed Wednesday November 23 2011

Page 10/15 | < Previous Page | 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • http post request with cross-origin in javascript

    - by Calamarico
    I have a problem with an HTTP POST call in Firefox. I know that when a request is cross-origin, Firefox first sends an OPTIONS request before the POST to discover the Access-Control-Allow headers. With this code I don't have any problem: Net.requestSpeech.prototype.post = function(url, data) { if(this.xhr != null) { this.xhr.open("POST", url); this.xhr.onreadystatechange = Net.requestSpeech.eventFunction; this.xhr.setRequestHeader("Content-Type", "application/json; charset=utf-8"); this.xhr.send(data); } } I tested this code with a simple HTML page that invokes the function. Everything is fine: I get the responses to both the OPTIONS and the POST, and I process them. But I'm trying to integrate this code into an existing application that uses jQuery (I don't know whether that matters), and when send(data) executes there, the browser (Firefox) does the same thing, first sending an OPTIONS request, but this time it never receives the server's response and prints this message in the console: [18:48:13.529] OPTIONS http://localhost:8111/ [undefined 31ms] Undefined... The "undefined" is because no response arrives, yet the code is the same, and I don't know why the OPTIONS request gets no response in this case. Does anyone have an idea? I debugged my server app and the OPTIONS request arrives fine, but it seems the browser doesn't wait for the response. Edit, later: OK, I think I see the problem. When I run a simple HTML page with a SCRIPT tag that invokes the method making the request, it works; but in the app that never receives the response, the request is made from a form's onsubmit event, and I think the submit handler returns so quickly that the browser has no time to complete the OPTIONS request. Edit, even later: I worked around the problem by making the POST request synchronous: this.xhr.open("POST", url, false); The submit handler returns very quickly and can't wait for the browser's OPTIONS exchange -- any better ideas?
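
    Rather than forcing the request synchronous, the lighter fix is usually to cancel the form's default submission so the page stays put while the asynchronous preflight and POST finish. A minimal sketch, assuming jQuery (already in the app); the form id, field id and the requestSpeech instance name are made up for illustration:

        // Hedged sketch - '#speechForm', '#speechText' and 'speech' are assumptions
        $('#speechForm').submit(function (event) {
            event.preventDefault();   // keep the browser on the page so the async
                                      // OPTIONS preflight + POST have time to complete
            var payload = JSON.stringify({ text: $('#speechText').val() });
            speech.post('http://localhost:8111/', payload);   // the async post() from the question
        });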

    Read the article

  • My Speaking Engagements in the Last Two Months

    - by gsusx
    I’ve been so busy lately with the activities around Moesion that I haven’t had time to blog about a couple of great conferences I had the opportunity to speak at in the last two months. Software Architect Conference, UK ( http://www.software-architect.co.uk/ ) This conference is becoming one of my favorite events of the year. As always Nick Payne and his team did a remarkable job lining up an all-star group of speakers that covered some of the hottest topics in today’s software industry. The first...(read more)

    Read the article

  • SkyDrive and Consumer Cloud Services

    - by Tim Murphy
    Paul Thurrott recently posted an article on the future of SkyDrive and I was asked what I thought about its future by @UserCommunity.  So let’s take a look. The breakdown from Microsoft that Paul described is, I believe, an accurate representation of users and usages. While I can’t say that I leverage SkyDrive to the extent it was meant to be used, I do enjoy having OneNote hosted there and being able to consult and edit it from the desktop, web and Windows Phone. Taking that one step further, the Midwest Geeks group, which started as the community of Microsoft-related user groups in our region, uses SkyDrive groups to share calendars and documents.  This collaboration aspect isn’t new in itself, but having it connected with the rest of your cloud assets makes life easier. Another recent usage of this type of cloud service is storing your personal music files in order to get that same universal access.  This is a scenario that has some arguments for and against.  On the one hand, own once and listen anywhere is great, but on the other hand the bandwidth cost becomes a giant downside.  This is especially the case since most carriers are now doing away with unlimited data packages. Ultimately I see this type of resource growing and evolving at a phenomenal rate over the next few years as we continue to become more mobile.  Having multiple players such as SkyDrive and iCloud will only help to give us more options.  Only time will tell where we end up next. del.icio.us Tags: SkyDrive,Cloud Services,Paul Thurrott,UserCommunity

    Read the article

  • Take care to unhook Anonymous Delegates

    - by David Vallens
    Anonymous delegates are great; they eliminate the need for lots of small classes that just pass values around. However, care needs to be taken when using them, as they are not automatically unhooked when the function you created them in returns. In fact, after it returns there is no way to unhook them. Consider the following code.   using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Diagnostics; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { SimpleEventSource t = new SimpleEventSource(); t.FireEvent(); FunctionWithAnonymousDelegate(t); t.FireEvent(); } private static void FunctionWithAnonymousDelegate(SimpleEventSource t) { t.MyEvent += delegate(object sender, EventArgs args) { Debug.WriteLine("Anonymous delegate called"); }; t.FireEvent(); } } public class SimpleEventSource { public event EventHandler MyEvent; public void FireEvent() { if (MyEvent == null) { Debug.WriteLine("Attempting to fire event - but no ones listening"); } else { Debug.WriteLine("Firing event"); MyEvent(this, EventArgs.Empty); } } } } If you expected the anonymous delegate to die with the function that created it, then you would expect the output Attempting to fire event - but no ones listeningFiring eventAnonymous delegate calledAttempting to fire event - but no ones listening However what you actually get is Attempting to fire event - but no ones listeningFiring eventAnonymous delegate calledFiring eventAnonymous delegate called In my example the issue just slows things down, but if your delegate modifies objects, then you could end up with difficult-to-diagnose bugs. A solution to this problem is to unhook the delegate within the function var myDelegate = delegate(){Console.WriteLine("I did it!");}; MyEvent += myDelegate; // .... later MyEvent -= myDelegate;

    Read the article

  • Unit testing is… well, flawed.

    - by Dewald Galjaard
    Hey, someone had to say it. I clearly recall my first IT job. I was appointed Systems Co-coordinator for a leading South African retailer at store level. Don’t get me wrong, there is absolutely nothing wrong with an honest day’s labor and in fact I highly recommend it; however, I’m obliged to refer to the designation cautiously, because in reality all I had to do was monitor in-store prices and two UNIX front line controllers. If anything went wrong – I only had to phone it in… Luckily that wasn’t all I did. My duties extended to another interesting annual occurrence – stock take. Despite being a somewhat more curious affair, it was still a tedious process that took weeks of preparation and several nights to complete.  I also remember that no matter how elaborate our planning was, the entire exercise would be rendered useless if we couldn’t get the basics right – that being the act of counting. Sounds simple, right? Well, with a store which could potentially carry tens of thousands of different items… well, let’s just say I believe that’s when I first became a coffee addict. In those days the act of counting stock was a very humble process. Nothing like we have today. A staff member would be assigned a bin or shelf filled with items he or she had to sort, then count. Thereafter they had to record their findings on a complementary piece of paper. Every night I would manage several teams. Each team was divided into two groups - counters and auditors. Both groups had the same task, only the auditors followed shortly on the heels of the counters, recounting stock levels and making sure the original count corresponded to their findings. It was a simple yet hugely responsible orchestration of people, and thankfully there was one fundamental and golden rule I could always abide by to ensure things ran smoothly – no-one was allowed to audit their own work. Nope, not even on nights when I didn’t have enough staff available. This meant I too at times had to get up there and get counting, or have the audit stand over until the next evening. The reason for this was obvious - late at night and with so much to do we were prone to make some mistakes, and then on the recount, without a fresh set of eyes, you were likely to repeat the offence. Now, years later, this rule or guideline still holds true as we develop software (as far removed as software development may be from counting stock). For some reason it is a fundamental guideline we’re simply ignorant of. We write our code, we write our tests, and thus commit the same horrendous offence. Yes, the procedure of writing unit tests as practiced in most development houses today – is flawed. Most if not all of the tests we write today exercise application logic – our logic. They are based on the way we believe an application or method should/may/will behave or function. As we write our tests, our unit tests mirror our best understanding of the inner workings of our application code. Unfortunately these tests will therefore also include (or be unaware of) any imperfections and errors on our part. If your logic is flawed as you write your initial code, chances are, without a fresh set of eyes, you will commit the same error the second time around too. Not even experience seems to be a suitable solution. It certainly helps to have deeper insight, but is that really the answer we should be looking for? Is that really failsafe? What about code review? Code review is certainly an answer. You could have one developer coding away and another (or a team) making sure the logic is sound. 
    The practice, however, has its obvious drawbacks. First and foremost, it is resource intensive, and from what I’ve seen in most development houses, given heavy deadlines, this guideline is seldom adhered to. Hardly ever do we have the resources, money or time readily available. So what other options are out there? A quest to find a solution revealed a project by Microsoft Research called PEX. PEX is a framework which creates several test scenarios for each method or class you write, automatically. Think of it as your own personal auditor. Within a few clicks the framework will auto-generate several unit tests for a given class or method and save them to a single project. PEX helps audit your work. It lends a fresh set of eyes to any project you’re working on and, best of all, it is cost-effective and fast. Check it out at http://research.microsoft.com/en-us/projects/pex/ In upcoming posts we’ll dive deeper into how it works and how it can help you.   Certainly there are more similar frameworks out there and I would love to hear from you. Please share your experiences and insights.

    Read the article

  • Rebuilding RAID1 in Ubuntu

    - by John Utech
    I had my second HD in my RAID1 come up with bad sectors. So I got another drive, pulled out the bad-sector drive and put the new drive in. With the original working RAID1 drive in the computer, it failed to boot. I manually copied everything from the old drive over via a GParted live CD. Still no booting. Kind of scratching my head here, as I can see that both of the drives have data on them but am unable to get either of them to boot. I used an Ubuntu live CD and couldn't even manually mount either of the drives, which I thought was really the odd part. Not sure where to go from here.
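
    If this is Linux software RAID (mdadm) rather than a motherboard fakeRAID, the usual drill is to re-add the new disk to the degraded array and reinstall the boot loader on both disks. A rough sketch only; the device and array names below are assumptions, so check /proc/mdstat first:

        # Hedged sketch - sda = surviving disk, sdb = new disk, md0 = the array (verify first!)
        cat /proc/mdstat
        sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb   # copy the partition layout to the new disk
        sudo mdadm --manage /dev/md0 --add /dev/sdb1     # add it back and let the array resync
        watch cat /proc/mdstat                           # wait for the rebuild to finish
        sudo grub-install /dev/sda                       # make sure either disk can boot
        sudo grub-install /dev/sdb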

    Read the article

  • pam_exec.so PAM module does not export variable PAM_USER as stated in the documentation

    - by davidparks21
    I'm trying to use the pam_exec.so PAM module to execute a script which needs to know the username/password coming from the application (OpenVPN in this case). I have a script that executes printenv >>afile, but I don't see all the environment variables that the man page states pam_exec.so exports (namely PAM_USER, I think); I only see the following: PAM_SERVICE=openvpn PAM_TYPE=auth PWD=/usr/local/openvpn/bin SHLVL=1 A__z="*SHLVL I do successfully pick up the password off of STDIN and output it with this same script. But for the life of me I can't get the username. Any thoughts on what I should try next?
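
    For reference, pam_exec.so is supposed to export PAM_USER alongside PAM_SERVICE and PAM_TYPE, and with the expose_authtok option the password arrives on the script's stdin. A hedged sketch of the two pieces; the script path, log path and the control flag in the assumed /etc/pam.d/openvpn service file are made up:

        # /etc/pam.d/openvpn (sketch - control flag and module order are assumptions)
        auth  required  pam_exec.so  expose_authtok  /usr/local/sbin/vpn-auth.sh

        #!/bin/sh
        # /usr/local/sbin/vpn-auth.sh - PAM_USER, PAM_SERVICE, PAM_TYPE come in via the
        # environment; the password is on stdin because of expose_authtok
        read -r password
        echo "auth attempt: user=${PAM_USER} service=${PAM_SERVICE}" >> /var/log/vpn-auth.log
        exit 0   # return non-zero to make PAM fail the authentication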

    Read the article

  • How to get Apache to look for files in different subfolders?

    - by prb
    I am definitely new to mod_rewrite. Note: the URL is common, and all the folders and subfolders are on the same host. The URL a user uses to access their page is http://myurl.com/1234/filename.jpg. Here the name of the subfolder is an integer that is unique and generated dynamically by another application. The subfolder stores images specific to an individual user. The folder structure is as follows: main1 = document root; main2 is another folder within main1 (the document root). /main1/1234/filename.jpg /main1/5678/filename.jpg /main1/2345/filename.jpg /main1/1212/filename.jpg /main1/main2/2367/filename.jpg /main1/main2/8790/filename.jpg /main1/main2/9966/filename.jpg So, I want to write a rewrite rule so that if a user types http://myurl.com/1234/filename.jpg, the rule looks up where the file actually is and serves it; for the request http://myurl.com/1234/filename.jpg the actual file is located at /main1/1234/filename.jpg and should be served from that folder. If another user makes a request such as http://myurl.com/9966/filename.jpg, it should be served from the following destination: /main1/main2/9966/filename.jpg. Please let me know if the question is still not clear. This is what I have done so far, and it does not work at all: RewriteCond {DOCUMENT_ROOT}/%{REQUEST_FILENAME} -f RewriteRule ^(.*)$ {DOCUMENT_ROOT}/$1 [L] RewriteCond {DOCUMENT_ROOT}/main2/%{REQUEST_FILENAME} -f RewriteRule ^(.*)$ {DOCUMENT_ROOT}/main2/$1 [L] Any help is greatly appreciated.
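
    One likely issue in the attempt above is that DOCUMENT_ROOT is written without the leading %, and REQUEST_FILENAME already expands to a full filesystem path, so the conditions never match. A hedged sketch of rules for the main1 (document root) .htaccess or vhost, under the assumption that files directly under the document root are already served without any rule:

        RewriteEngine On
        # If the requested file is not directly under the document root (main1)...
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        # ...but does exist under main1/main2, serve it from there instead
        RewriteCond %{DOCUMENT_ROOT}/main2%{REQUEST_URI} -f
        RewriteRule ^(.*)$ /main2/$1 [L]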

    Read the article

  • Optimising bare-metal hypervisor installation

    - by Stephen
    What is the best way to install a bare-metal hypervisor (i.e. to host multiple VMs)? I spoke to a friend and he is using an HP server to host all his VMs with VMware ESXi, but he installed the VMware ESXi software on a flash card. He can then use the full hard disk capacity of each drive for the VMs. Is this a pretty standard setup when configuring a bare-metal hypervisor? How do you guys do it, and what is best?

    Read the article

  • Distributing Files using a Group Policy on Windows Server 2003

    - by tonedeath
    A piece of software that we use at our office has recently moved to a new licensing system. This means that from now on a new set of license key files will need to be distributed to each of our 25 client installations every year. All of the clients run XP and are part of an AD domain controlled by a Windows 2003 DC. I'm already using group policies to deploy software updates. I gather that this is possible with Group Policy Preferences in Server 2008; I'm just looking for a good method using Server 2003. The same set of files needs copying to each client. I also have them hosted on a network share accessible by each client. I'm more of a *nix person, so I'm not particularly up on scripting in a Windows environment.
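
    On Server 2003 (without Group Policy Preferences) the usual fallback is a computer startup script assigned through the GPO (Computer Configuration > Windows Settings > Scripts) that copies the files down from the share. A hedged sketch; the UNC path, file mask and destination directory are placeholders:

        @echo off
        rem Hedged sketch - adjust the share, mask and destination for your environment
        rem /Y overwrites without prompting, /D only copies files newer than the local copy
        xcopy "\\fileserver\licenses\*.lic" "C:\Program Files\VendorApp\" /Y /D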

    Read the article

  • Apache Getting Bogged Down By Certain Script (Wp-Cron.php) - How To Kill Process Automatically

    - by user50037
    I have a server that is running a number of WordPress blogs, and a number of them have several hundred/thousand posts. Every couple of days, the server slows to a crawl due to a WordPress file called wp-cron.php being run. My entire Apache process log turns into this: http://imgur.com/A7K9k.png Multiply that by quite a bit, and the server grinds to a halt. Each process takes up about 1.1% of RAM, and when we have 50 of them on the go it gets insane. Not all of them are coming from the same blog; they are pretty widespread. In the Apache process page of WHM they are usually ALL set to the status "C", which means closing, but they can sit there until they crash the server, and they still hold the memory. Just google "wp-cron.php load" and you will find plenty of people with similar issues. In any case, we think it is down to users adding a tonne of dead "ping lists" to their WordPress installations, which WordPress then loops through endlessly. Problem number 1: does anyone have any other suggestions about what would cause the WordPress file wp-cron.php to loop endlessly? I still think it is down to pings, because all of the people we have contacted about their account load going sky high have had massive ping lists. Problem number 2: even if it is down to excessive ping lists in WordPress, we cannot be babying every single account on the server waiting for it to start spawning wp-cron processes. It often happens overnight, and I start getting SMS alerts at 2am about the load. I have CSF installed, which apparently would have ended the processes if they ran over XXX time, but I have been told that it won't catch these processes because they end up in the "closing" state (they show up as "C" on the Apache page of WHM); apparently CSF will only kill processes that are "running", which "C" does not count as. I have seen various other scripts such as http://dltj.org/article/die-apache-die/ . I took a look at the stat files under /proc, but I was baffled as to which delimited field was the running time, and whether there was any way to connect it back to an actual Apache process so that I could see what file was running (and so only close connections tied to wp-cron.php with a state of "C"). Overall I know Problem 2 glosses over the real reason, and I do put the whole thing down to excessive ping lists in WordPress, but I just cannot sit there and babysit every single installation 24/7. So I need a way to save the server when I am not available. Any help would be much appreciated.
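
    A common mitigation for runaway wp-cron.php (separate from hunting down the bad ping lists) is to stop WordPress spawning it from page traffic and instead fire it on a fixed schedule from system cron. A sketch; the blog URL and the 15-minute interval are placeholders to adjust per site:

        // in wp-config.php - stop WordPress triggering wp-cron.php on page loads
        define('DISABLE_WP_CRON', true);

        # crontab entry (placeholder URL) - run the scheduled tasks every 15 minutes instead
        */15 * * * * wget -q -O /dev/null http://example.com/wp-cron.php?doing_wp_cron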

    Read the article

  • Win XP Pro, IIS 5.1, PCI Compliance

    - by Mudman266
    I have a client that was scanned and determined not to be PCI compliant. I looked and they had IIS set up to allow a program from the central office to push/pull info from their server. Many of the reasons they failed appeared to have been fixed in service packs (they were on SP2) or security updates. I fully patched the server to (Windows XP Pro) SP3 with all optional updates. I had them scan again, and again they failed, with only one fewer vulnerability, which I had manually corrected (the server was showing debugging/error messages). The main issue I'm having is that when I research the CVE for each finding, it says it was fixed in SP2 and up. I'm wondering if I need to remove IIS and set it up again, since I have patched to SP3. Any ideas?

    Read the article

  • Run Microsoft SCCM Remote Control Viewer on Client Machines?

    - by David Mackey
    I've installed SCCM 2012 on a server and have successfully used the Remote Control option to take control of a system I've set up to be managed by SCCM. Now, I don't want to have to log in to the server every time I want to access this client... is there a way to run the Remote Control Viewer client on my desktop OS so I can take remote control of systems rather than having to remote in from the server? This seems like very basic functionality... but I haven't been able to figure it out thus far.
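
    One approach is to install the ConfigMgr 2012 admin console on the desktop; the remote control viewer (CmRcViewer.exe) ships with it and can be launched standalone with the target machine as an argument. A hedged sketch; the install path below is the usual default but may differ, and the computer name is a placeholder:

        :: Hedged sketch - adjust the console install path and the target computer name
        "C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\i386\CmRcViewer.exe" PC1234.contoso.local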

    Read the article

  • How to configure S3 or DNS to handle incomplete name (sans www) for web site?

    - by user193116
    I have set up a bucket called "www.mydomainname.com" to host my website, and I have configured the CNAME such that "www.mydomainname.com" points to my endpoint http://www.mydomainname.com.s3-website-us-east-1.amazonaws.com/ It works: people who type the full URL "www.mydomainname.com" are able to see my index page. But most people are in the habit of typing an incomplete domain name -- they just type "mydomainname.com" and their browser fails to find my site. Is there a way to configure the CNAME or the S3 bucket such that typing "mydomainname.com" takes them to my S3 website? (I am using Network Solutions as my DNS provider.)

    Read the article

  • How to show users the reason for a message being bounced or rejected by Postfix?

    - by Ross Bearman
    A user would like to be able to view a web page showing any emails that a Postfix server has either been unable to send or unable to receive. For example, if the user was supposed to receive an email from a third party but it hasn't arrived, they'd be able to check the web page and see a list of emails rejected by Postfix, along with a clear reason as to why. I've been unable to find an existing application that offers this functionality. Does anyone know of any, or is the best way forward to write a script that parses the log and displays the results?
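
    In the absence of a ready-made tool, a script that scrapes the Postfix mail log is the usual starting point: rejected and bounced messages, with the reason Postfix recorded, live there. A hedged sketch; the log path varies by distribution (/var/log/mail.log on Debian/Ubuntu, /var/log/maillog on RHEL/CentOS):

        # Inbound mail rejected at SMTP time, with the rejection reason:
        grep 'NOQUEUE: reject' /var/log/mail.log
        # Outbound mail that could not be delivered, with the remote server's response:
        grep -E 'status=(bounced|deferred)' /var/log/mail.log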

    Read the article

  • How to setup multiple Apache SSL sites using multiple IP addresses

    - by Jeff
    How do you setup a single Apache2 config to host multiple HTTPS sites each on their own IP address? There will also be multiple HTTP sites on just a single IP address. I do not want to use Server Name Indication (SNI) as described here, and I'm only concerned with the important top-level Apache directives. That is, I just need to know the skeleton of how my config should look. The basic setup looks like this: Hosted on 1.1.1.1:80 (HTTP) - example.com - example.net - example.org Hosted on 2.2.2.2:443 (HTTPS) - secure.com Hosted on 3.3.3.3:443 (HTTPS) - secure.net Hosted on 4.4.4.4:443 (HTTPS) - secure.org And here are the important config directives I have so far, which is the closest I've come to a working iteration, but still no dice. I know I'm close, just need a little push in the right direction. Listen 1.1.1.1:80 Listen 2.2.2.2:443 Listen 3.3.3.3:443 Listen 4.4.4.4:443 NameVirtualHost 1.1.1.1:80 NameVirtualHost 2.2.2.2:443 NameVirtualHost 3.3.3.3:443 NameVirtualHost 4.4.4.4:443 # HTTP VIRTUAL HOSTS: <VirtualHost 1.1.1.1:80> ServerName example.com DocumentRoot /home/foo/example.com </VirtualHost> <VirtualHost 1.1.1.1:80> ServerName example.net DocumentRoot /home/foo/example.net </VirtualHost> <VirtualHost 1.1.1.1:80> ServerName example.org DocumentRoot /home/foo/example.org </VirtualHost> # HTTPS VIRTUAL HOSTS: <VirtualHost 2.2.2.2:443> ServerName secure.com DocumentRoot /home/foo/secure.com SSLEngine on SSLCertificateFile /home/foo/ssl/secure.com.crt SSLCertificateKeyFile /home/foo/ssl/secure.com.key SSLCACertificateFile /home/foo/ssl/ca.txt </VirtualHost> <VirtualHost 3.3.3.3:443> ServerName secure.net DocumentRoot /home/foo/secure.net SSLEngine on SSLCertificateFile /home/foo/ssl/secure.net.crt SSLCertificateKeyFile /home/foo/ssl/secure.net.key SSLCACertificateFile /home/foo/ssl/ca.txt </VirtualHost> <VirtualHost 4.4.4.4:443> ServerName secure.org DocumentRoot /home/foo/secure.org SSLEngine on SSLCertificateFile /home/foo/ssl/secure.org.crt SSLCertificateKeyFile /home/foo/ssl/secure.org.key SSLCACertificateFile /home/foo/ssl/ca.txt </VirtualHost> For what it's worth, I prefer to have each of my SSL sites on their own IP instead of including one of them on the primary VHOST IP. Any links which show a standard setup would be more than welcome!

    Read the article

  • Video streaming over multi display units

    - by ramdaz
    We have to share video across around 4/8 terminals at a public facility where we need to display live video from within the facility, as well as display messages (advertisements), and also play videos (not live) which need to be controlled centrally from another location. We can do the central-location handling over the Internet, over ssh. What we want to do is connect cameras to a computer, and use the computer to display over multiple display units. We need to do live titling if possible. Once the live local telecast, which usually takes about an hour or two a day, is over, we would like to play other videos locally off the PC server. Preferably everything should run off Linux, since budgets are very constrained.... Addendum -- It's not over a WAN, it's over a local area. I'd prefer not to use LAN; we would rather use coaxial cable if possible. The reason is that if it's LAN, I need some kind of networking device, at least a thin client.

    Read the article

  • Multiple users writing to one Samba mount point in OSX

    - by Sam
    I have an OSX box containing a script which writes a unique file to a Samba share. The first part of the script mounts the share. On the machine are 2 users - UserA and UserB. Either may need to run this script at any given time; however, only the user who mounted the share is able to write to it. I really need both users to have rwx access. Here is what I have tried: mounting then chmod'ing the mountpoint (no effect - overruled by the Samba server?); chmod'ing the mountpoint then mounting (same as above); sudo mount_smbfs. Both users have admin privileges. Ideally a solution would be executable by one of the users (contained in the script) and not rely on mounting at machine boot time. Any ideas appreciated, thanks!
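
    One thing worth trying is passing file and directory modes to mount_smbfs itself, so the mount point comes up group-writable no matter which user runs the script. A hedged sketch; the share, credentials and mount point are placeholders:

        # Hedged sketch - share, credentials and mount point are placeholders
        mkdir -p /Volumes/shared
        mount_smbfs -f 0775 -d 0775 //user:pass@server/share /Volumes/shared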

    Read the article

  • What the heck is an OPTIONS method in an IIS 7.5 web log?

    - by Knox
    I know what a GET and a POST are, but it's almost impossible to Google for the word OPTIONS. Here's what I see (I deleted all the stuff at the end of each): 11/23/11 0:02:13 10.100.0.14 GET /CUpdate2.cshtml _=1322006533495 11/23/11 0:02:13 10.200.0.10 OPTIONS /AssignmentCount _=1322006576798 11/23/11 0:02:13 10.200.0.10 GET /media/faxSound.wav - 11/23/11 0:02:13 10.200.0.10 GET /Star/StarUpdates _=1322006578729 11/23/11 0:02:13 10.100.0.10 GET /CUpdate2.cshtml _=1322006533268

    Read the article

  • RAID strategy - 8 1TB drives

    - by alex
    I'm setting up a backup storage device. This machine has Windows Server 2008 on a separate boot drive. It has 8x 1TB drives and uses a hardware RAID card. My question is: which RAID configuration should I go for? Initially I was going to go with RAID 5 across all 8 drives; however, members on Server Fault have advised against it, and I was just wondering why. Some people have suggested two RAID 5 arrays of 4 drives each, then striping them... I want to maximise the storage space, as this is a backup unit - it will store SQL backups, Acronis images, files, etc. It won't be for public access, so I wouldn't think the I/O will be that high.

    Read the article

  • how to warehouse data that is not needed from sql server

    - by I__
    I have been asked to truncate a large table in SQL Server 2008. The data is not needed day to day, but might be needed once every two years. It will NEVER have to be changed, only viewed. The question is: since I don't need the data on a day-to-day basis, what do I do with it to protect it and back it up? Please keep in mind that I will need to have it accessible maybe once every two years, and it is FINE for us if the recovery process takes a few hours. The entire table is about 3 million rows and I need to truncate it to about 1 million rows.
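
    Since the rows only ever need to be read again, one straightforward pattern is to copy the old rows into a separate archive database, back that up (or move the backup to cheap storage), then delete them from the live table. A hedged sketch; the database, table and date-column names and the cutoff predicate are all made up:

        -- Hedged sketch: all object names and the cutoff predicate are assumptions
        SELECT *
        INTO   ArchiveDB.dbo.BigTable_Archive
        FROM   LiveDB.dbo.BigTable
        WHERE  CreatedDate < '2010-01-01';

        BACKUP DATABASE ArchiveDB TO DISK = N'D:\Backups\ArchiveDB.bak';

        DELETE FROM LiveDB.dbo.BigTable
        WHERE  CreatedDate < '2010-01-01';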

    Read the article

  • Howto Nginx + git-http-backend + fcgiwrap (Debian Squeeze)

    - by brainsqueezer
    I am trying to setup git-http-backend with Nginx but after 24 hours wasting time and reading everything I could I think this config should work but doesn't. server { listen 80; server_name mydevserver; access_log /var/log/nginx/dev.access.log; error_log /var/log/nginx/dev.error.log; location / { root /var/repos; } location ~ /git(/.*) { gzip off; root /usr/lib/git-core; fastcgi_pass unix:/var/run/fcgiwrap.socket; include /etc/nginx/fastcgi_params2; fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend; fastcgi_param DOCUMENT_ROOT /usr/lib/git-core/; fastcgi_param SCRIPT_NAME git-http-backend; fastcgi_param GIT_HTTP_EXPORT_ALL ""; fastcgi_param GIT_PROJECT_ROOT /var/repos; fastcgi_param PATH_INFO $1; #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; } } Content of /etc/nginx/fastcgi_params2 fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param REMOTE_USER $remote_user; # required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; but config seems not working $ git clone http://mydevserver/git/myprojectname/ Cloning into myprojectname... warning: remote HEAD refers to nonexistent ref, unable to checkout. and I can request an unexistant project and I will get the same answer $ git clone http://mydevserver/git/thisprojectdoesntexist/ Cloning into thisprojectdoesntexist... warning: remote HEAD refers to nonexistent ref, unable to checkout. If I change root to /usr/lib I will get a 403 error and this will be reported to nginx error log: 2011/11/23 15:52:46 [error] 5224#0: *55 FastCGI sent in stderr: "Cannot get script name, is DOCUMENT_ROOT and SCRIPT_NAME set and is the script executable?" while reading response header from upstream, client: 198.168.0.4, server: mydevserver, request: "GET /git/myprojectname/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "mydevserver" My main trouble is with the correct root value with this configuration. Maybe there are some permissions problems. Notes: /var/repos/ is owned by www-data and contains folders bit git bare repos. All this works perfectly using ssh. If I go with my browser to http://mydevserver/git/myproject/info/refs it is answered by git-http-backend asking me to send a command. /var/run/fcgiwrap.socket has 777 permissions.

    Read the article

  • Spring-mvc project can't select from a particular mysql table

    - by Dan Ray
    I'm building a Spring-mvc project (using JPA and Hibernate for DB access) that is running just great locally, on my dev box, with a local MySQL database. Now I'm trying to put a snapshot up on a staging server for my client to play with, and I'm having trouble. Tomcat (after some wrestling) deploys my war file without complaint, and I can get some response from the application over the browser. When I hit my main page, which is behind Spring Security authentication, it redirects me to the login page, which works perfectly. I have Security configured to query the database for user details, and that works fine. In fact, a change to a password in the database is reflected in the behavior of the login form, so I'm confident it IS reaching the database and querying the user table. Once authenticated, we go to the first "real" page of the app, and I get a "data access failure" error. The server's console log gets this line (redacted): ERROR org.hibernate.util.JDBCExceptionReporter - SELECT command denied to user 'myDbUser'@'localhost' for table 'asset' However, if I go to MySQL from the shell using exactly the same creds, I have no problem at all selecting from the asset table: [development@tomcat01stg]$ mysql -u myDbUser -pmyDbPwd dbName ... mysql> \s -------------- mysql Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (i686) using readline 5.1 Connection id: 199 Current database: dbName Current user: myDbUser@localhost ... UNIX socket: /var/lib/mysql/mysql.sock -------------- mysql> select count(*) from asset; +----------+ | count(*) | +----------+ | 19 | +----------+ 1 row in set (0.00 sec) I've broken down my MySQL access settings, cleaned out the user and re-run the grant commands, set up a version of the user from 'localhost' and another from '%', making sure to flush permissions.... Nothing is changing the behavior of this thing. What gives?
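
    A frequent cause of this split behaviour (CLI works, JDBC doesn't) is that the shell and the JDBC driver resolve to different user@host accounts (socket vs. TCP), or that their privileges differ. A hedged sketch of checks, reusing the names from the question; adjust to your setup:

        -- 1. Over the app's own connection (e.g. a quick JDBC query), see which account it really is
        SELECT CURRENT_USER(), USER();
        -- 2. As root, compare the grants for each host variant of the account
        SHOW GRANTS FOR 'myDbUser'@'localhost';
        SHOW GRANTS FOR 'myDbUser'@'%';
        -- 3. If the JDBC connection comes in over TCP as 127.0.0.1, grant that pair too
        GRANT SELECT, INSERT, UPDATE, DELETE ON dbName.* TO 'myDbUser'@'127.0.0.1' IDENTIFIED BY 'myDbPwd';
        FLUSH PRIVILEGES;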

    Read the article

  • Nginx + uWSGI + Django performance stuck on 100rq/s

    - by dancio
    I have configured Nginx with uWSGI and Django on CentOS 6 x64 (3.06GHz i3 540, 4GB), which should easily handle 2500 rq/s but when I run ab test ( ab -n 1000 -c 100 ) performance stops at 92 - 100 rq/s. Nginx: user nginx; worker_processes 2; events { worker_connections 2048; use epoll; } uWSGI: Emperor /usr/sbin/uwsgi --master --no-orphans --pythonpath /var/python --emperor /var/python/*/uwsgi.ini [uwsgi] socket = 127.0.0.2:3031 master = true processes = 5 env = DJANGO_SETTINGS_MODULE=x.settings env = HTTPS=on module = django.core.handlers.wsgi:WSGIHandler() disable-logging = true catch-exceptions = false post-buffering = 8192 harakiri = 30 harakiri-verbose = true vacuum = true listen = 500 optimize = 2 sysclt changes: # Increase TCP max buffer size setable using setsockopt() net.ipv4.tcp_rmem = 4096 87380 8388608 net.ipv4.tcp_wmem = 4096 87380 8388608 net.core.rmem_max = 8388608 net.core.wmem_max = 8388608 net.core.netdev_max_backlog = 5000 net.ipv4.tcp_max_syn_backlog = 5000 net.ipv4.tcp_window_scaling = 1 net.core.somaxconn = 2048 # Avoid a smurf attack net.ipv4.icmp_echo_ignore_broadcasts = 1 # Optimization for port usefor LBs # Increase system file descriptor limit fs.file-max = 65535 I did sysctl -p to enable changes. Idle server info: top - 13:34:58 up 102 days, 18:35, 1 user, load average: 0.00, 0.00, 0.00 Tasks: 118 total, 1 running, 117 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 3983068k total, 2125088k used, 1857980k free, 262528k buffers Swap: 2104504k total, 0k used, 2104504k free, 606996k cached free -m total used free shared buffers cached Mem: 3889 2075 1814 0 256 592 -/+ buffers/cache: 1226 2663 Swap: 2055 0 2055 **During the test:** top - 13:45:21 up 102 days, 18:46, 1 user, load average: 3.73, 1.51, 0.58 Tasks: 122 total, 8 running, 114 sleeping, 0 stopped, 0 zombie Cpu(s): 93.5%us, 5.2%sy, 0.0%ni, 0.2%id, 0.0%wa, 0.1%hi, 1.1%si, 0.0%st Mem: 3983068k total, 2127564k used, 1855504k free, 262580k buffers Swap: 2104504k total, 0k used, 2104504k free, 608760k cached free -m total used free shared buffers cached Mem: 3889 2125 1763 0 256 595 -/+ buffers/cache: 1274 2615 Swap: 2055 0 2055 iotop 30141 be/4 nginx 0.00 B/s 7.78 K/s 0.00 % 0.00 % nginx: wo~er process Where is the bottleneck ? Or what am I doing wrong ?

    Read the article

  • mget: filename.xlsx: file already exists and xfer:clobber is unset

    - by Chris
    I am getting this "mget: filename.xlsx: file already exists and xfer:clobber is unset" error when I try to download the contents of my FTP server. Basically it is set up using Cygwin. We recently upgraded the server that all of the data is downloaded to on a set schedule. The old server was Windows Server 2003, and the new server is Windows Server 2008. I am having issues when I try to download a file that is already in the folder. The client never changes the file name, so when we go to download it from the server we get that error. Is there anything I can put in the batch files, or some setting, to force it to just replace that file? Thanks in advance.
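
    Assuming the transfers are driven by lftp (the xfer:clobber wording in the error is lftp's), letting mget overwrite existing local files is a one-line setting. A sketch; the host, credentials, path and file mask are placeholders:

        # Hedged sketch - host, credentials and file mask are placeholders
        lftp -u ftpuser,ftppass ftp.example.com -e "set xfer:clobber on; mget /incoming/*.xlsx; quit"

        # or make it permanent for the account running the batch job:
        echo "set xfer:clobber on" >> ~/.lftprc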

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15  | Next Page >