Monthly Archives

Articles indexed in April 2012


  • What are the implications of using static const instead of #define?

    - by Simon Elliott
    gcc complains about this:

        #include <stdio.h>
        static const int YY = 1024;
        extern int main(int argc, char *argv[])
        {
            static char x[YY];
        }

        $ gcc -c test1.c
        test1.c: In function `main':
        test1.c:5: error: storage size of `x' isn't constant
        test1.c:5: error: size of variable `x' is too large

    Remove the "static" from the definition of x and all is well. I'm not exactly clear what's going on here: surely YY is constant? I had always assumed that the "static const" approach was preferable to "#define". Is there any way of using "static const" in this situation?
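
    For context on why gcc objects: in C (unlike C++), a const-qualified int is a read-only variable rather than a constant expression, so it cannot size an object with static storage duration. A hedged sketch of the usual workarounds (identifier names are illustrative):

        #define YY_MACRO 1024        /* a true compile-time constant */
        enum { YY_ENUM = 1024 };     /* enum constants are constant expressions too */
        static const int YY = 1024;  /* in C: a read-only variable, not a constant expression */

        int main(void)
        {
            static char a[YY_MACRO]; /* OK: sized by a constant expression */
            static char b[YY_ENUM];  /* OK: same */
            char c[YY];              /* OK only as a C99 VLA (automatic storage) */
            (void)a; (void)b; (void)c;
            return 0;
        }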

    Read the article

  • Second Thread Holding Up Entire Program in C# Windows Form Application

    - by Brandon
    In my Windows Forms application, I'm trying to test the user's ability to access a remote machine's shared folder. The way I'm doing this (and I'm sure that there are better ways... but I don't know of them) is to check for the existence of a specific directory on the remote machine (I'm doing this because of firewall/other security restrictions that I'm confronted with in my organization). If the user has rights to access the shared folder, then it returns in no time at all, but if they don't, it hangs forever. To solve this, I threw the check into another thread and wait only 1000 milliseconds before determining that the share can't be hit by the user. However, when I do this, it still hangs as if the check had never been moved off the main thread. What is making it hang and how do I fix it? I would think that the fact that it is in a separate thread would allow me to just let the thread finish on its own in the background. Here is my code:

        bool canHitInstallPath = false;
        Thread thread = new Thread(new ThreadStart(() =>
        {
            canHitInstallPath = Directory.Exists(compInfo.InstallPath);
        }));
        thread.Start();
        thread.Join(1000);
        if (canHitInstallPath == false)
        {
            throw new Exception("Cannot hit folder: " + compInfo.InstallPath);
        }
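
    For what it's worth, a hedged sketch of one common variation (not a confirmed fix, since the hang may also originate elsewhere in the form's code): make the probe a background thread so a stuck Directory.Exists can never keep the process alive, and let the Join timeout drive the decision. ShareProbe and its parameters are illustrative names:

        using System.IO;
        using System.Threading;

        static class ShareProbe
        {
            // Returns false when the share cannot be confirmed within the timeout.
            public static bool CanHit(string installPath, int timeoutMs)
            {
                bool exists = false;
                var thread = new Thread(() =>
                {
                    try { exists = Directory.Exists(installPath); }
                    catch { /* treat any failure as "cannot hit" */ }
                });
                thread.IsBackground = true; // a stuck probe no longer keeps the process alive
                thread.Start();

                // Join returns false on timeout; decide now and move on either way.
                return thread.Join(timeoutMs) && exists;
            }
        }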

    Read the article

  • Submit information to url, but also open PDF

    - by Mad Ducky Digital Branding
    I have a client whose desire is to have her Wordpress blog show a MailChimp form on her home page as a gateway to a .pdf. I need the following behavior to occur when the user clicks "Submit": execute the included MailChimp javascript file; this ensures the form was properly filled in, and then performs the sign-up to the newsletter list (don't need help with this part) then show the user an informational PDF for download or viewing EDIT: The logical order was flipped from when I originally posted this. The script should execute, and only if the script executes properly should the PDF be shown to the user. Note: My experience level with HTML and PHP is 3/4, and with JS I am 2/4 EDIT: (seems more like 1/4 at this point lol). If my research is correct, PHP (a server-side language) would be used to do what the client wants. Additional validation beyond what MailChimp's script provides (it ensures that the user has submitted a completed form) is not necessary in this case (the client says it's ok if the e-mail isn't valid at all). EDIT: Reworded this sentence from the original post to be clearer. The .pdf URL and content is static, and simply needs to be shown, not generated. ----RESEARCH---- I know that the MailChimp form uses the following line to actually submit the information, but I want to do the action mentioned below, as well as open the aforementioned .pdf:

        <form action="http://*BLAH*.us2.list-manage.com/subscribe/post?u=*BLAHBLAH*&amp;id=*BLAHBLAHBLAH*" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank">

    I am reading on other sites that I can conceivably point "action" to a .php file, but if there is a way to do this with javascript - since it's using the .js file that I created for that already anyways - then I would be most happy. Barring that, I'll take what I can get. ----SOLUTION?---- ...
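
    A minimal client-side sketch, with loud assumptions: the PDF path is a placeholder, the form keeps its target="_blank" (so the MailChimp POST opens in a new tab while this page navigates to the PDF), and if MailChimp's validation script intercepts the submit event, the redirect would need to move into that script's success callback instead:

        <script>
        // Hypothetical handler: once the browser actually submits the form
        // (i.e. validation let it through), send this page to the PDF.
        document.getElementById('mc-embedded-subscribe-form')
          .addEventListener('submit', function () {
            window.location.href = '/downloads/newsletter-info.pdf'; // placeholder
          });
        </script>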

    Read the article

  • VS 11 Beta merge tool is awesome, except for resolving conflicts

    - by deadlydog
    If you've downloaded the new VS 11 Beta and done any merging, then you've probably seen the new diff and merge tools built into VS 11. They are awesome, and a vast improvement over the ones included in VS 2010. There is one problem with the merge tool though, and in my opinion it is huge.

    Basically, the problem with the new VS 11 Beta merge tool is that when you are resolving conflicts after performing a merge, you cannot tell what changes were made in each file where the code is conflicting. Was the conflicting code added, deleted, or modified in the source and target branches? I don't know (without explicitly opening up the history of both the source and target files), and the merge tool doesn't tell me. In my opinion this is a huge fail on the part of the designers/developers of the merge tool, as it actually forces me to either spend an extra minute on every conflict viewing the source and target file history, or to go back to the merge tool in VS 2010 to properly assess which changes I should take.

    I submitted this as a bug to Microsoft, but they say that this is intentional by design. WHAT?! So they purposely crippled their tool in order to make it pretty and keep the look consistent with the new diff tool? That's like purposely putting a little hole in the bottom of your cup for design reasons to make it look cool. Sure, the cup looks cool, but I'm not going to use it if it leaks all over the place and doesn't do the job it is intended for. Bah! But I digress.

    Because this bug is apparently a feature, they asked me to open up a "feature request" to have the problem fixed. Please go vote up both my bug submission and the feature request so that this tool will actually be useful by the time the final VS 11 product is released.

    Read the article

  • Database Development and Source Control

    - by Enrique Lima
    I have been working with database development and the aspects that come with it: the pain and the joy of moving from Dev to QA and then on to Production. Source control has a place in Dev, and that is where the baselines should be established. Where am I going with this? I have been working with Redgate's Source Control 3.0, and I am seeing some features that are great for the process of moving from Dev to … well, something that allows for quite a level of control. We are not only talking about scripting the structure of a database, but creating a baseline, working with migration scripts, and integration with Redgate's Schema Compare. There is a detailed paper that will be posted here in the next day or so providing step-by-step information on the process of defining your baseline in Dev and then taking it to the desired destination. In the meantime, check the webinars Redgate has regarding this process and these products.

    Read the article

  • Microsoft Press Deal of the Day - 5/April/2012 - Windows® Internals, Part 1, Sixth Edition

    - by TATWORTH
    Today's Deal of the Day from Microsoft Press at http://shop.oreilly.com/product/0790145305930.do is Windows® Internals, Part 1, Sixth Edition. "Delve inside Windows architecture and internals—guided by a team of internationally renowned internals experts. Fully updated for Windows 7 and Windows Server 2008 R2, this classic guide delivers key architectural insights on system design, debugging, performance, and support—along with hands-on experiments to experience Windows internal behavior firsthand."

    Read the article

  • How do I format this regex so it will work in fail2ban?

    - by chapkom
    I've just installed fail2ban on my CentOS server in response to an SSH brute-force attempt. The default regular expressions in fail2ban's sshd.conf file do not match any entries in audit.log, which is where SSH seems to be logging all connection attempts, so I am trying to add an expression that will match. The string I am trying to match is as follows:

        type=USER_LOGIN msg=audit(1333630430.185:503332): user pid=30230 uid=0 auid=500 subj=user_u:system_r:unconfined_t:s0-s0:c0.c1023 msg='acct="root": exe="/usr/sbin/sshd" (hostname=?, addr=<HOST IP>, terminal=sshd res=failed)'

    The regular expression I am attempting to use is:

        ^.*addr=<HOST>, terminal=sshd res=failed.*$

    I've used regextester.com and regexr to try to build the regex. The testers give me a match for this regex:

        ^.*addr=\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}, terminal=sshd res=failed.*$

    but fail2ban-regex complains if I don't use the <HOST> tag in the regex. However, using the <HOST> version gives me 0 matches. At this point, I am totally stuck and I would greatly appreciate any assistance. What am I doing wrong in the regex I am trying to use?
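
    For reference, a hedged sketch of a custom filter file (untested against this exact log): <HOST> is a fail2ban macro that expands to a named capture group, which is why the web regex testers reject it. Also worth knowing: a frequent cause of 0 matches on audit.log is the timestamp, since fail2ban must recognize the date at the start of the line, and audit's epoch-style audit(1333630430.185:503332) stamp is not a format every fail2ban version parses.

        # /etc/fail2ban/filter.d/sshd-audit.conf  (hypothetical file name)
        [Definition]
        failregex = addr=<HOST>, terminal=sshd res=failed
        ignoreregex =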

    Read the article

  • Find router's IP address on the other side

    - by corsiKa
    Here's the basic setup of my network. In this diagram:

        1: The internet
        c: cable
        2: Wireless router
        w: wireless connection
        3: A win7 box with internet connection sharing enabled
        4: A wireless router, but I'm only using its LAN capabilities to connect box 5 to the internet.
        5: A win7 box, the computer I'm using to make this post. So its internet works just fine.

    Now if I'm on box 5 and I ping 192.168.1.1, I hit 4. If I'm on box 3 and I ping 192.168.1.1, I hit 2. Now obviously box 3 does not think 4's IP address is 192.168.1.1, or I wouldn't be able to connect to the internet. Okay, now that you know as much as I do about my network, here's my question: if I was on box 3, how would I determine the IP address of 4? Basically I'm running a webserver on box 5 and want to access this webserver from box 3 and other boxes. So that's the end goal. If there's other information that can help, I'd appreciate it. Thanks!
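
    One hedged way to find 4's WAN-side address from box 3, assuming box 4's WAN port took its lease from box 3's connection sharing (Win7 ICS hands out 192.168.137.x by default): generate some traffic from box 5, then inspect box 3's ARP cache from a command prompt:

        arp -a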

    Read the article

  • Dual DC Time Service

    - by poconnor
    I believe I'm having an issue with my Domain Controllers and Time Server. On my backup DC, I keep seeing a warning stating "The time service has stopped advertising as a time source because the local clock is not synchronized." Does this mean that my backup DC believes it's a Time Server? My PDC should be the time server, and I have gone through setting up the PDC as the time server. I was not around for the original setup of the time server with the old PDC and backup DC, but I believe the old PDC was the time server, so I set up the new PDC as the new time server when I decommissioned the old PDC. Is it possible that the backup DC was set up as the time server and it still thinks it's supposed to be giving out time to everyone? The time service Type value in the registry is NTP on the PDC and NT5DS on the backup DC. Results of w32tm /monitor:

        Getting AD DC list for default domain...
        Analyzing:
        DC2.local..com [192.168.1.8:123]:
            ICMP: 1ms delay
            NTP: -0.6349491s offset from DC1.local..com
            RefID: DC1.local..com [192.168.1.9]
            Stratum: 4
        DC1.local..com *** PDC *** [192.168.1.9:123]:
            ICMP: 0ms delay
            NTP: +0.0000000s offset from DC1.local..com
            RefID: wwwco1test12.microsoft.com [65.55.21.20]
            Stratum: 3
        Warning: Reverse name resolution is best effort. It may not be correct since RefID field in time packets differs across NTP implementations and may not be using IP addresses.
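
    If the backup DC does turn out to be advertising on its own, a hedged sketch of the usual reset (run on the backup DC) puts it back on domain-hierarchy sync so it follows the PDC emulator:

        w32tm /config /syncfromflags:domhier /update
        net stop w32time && net start w32time
        w32tm /resync /rediscover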

    Read the article

  • Group policy not applying to security group

    - by ihavenoideawhatimdoing
    Preface: I have enough privileges to create GPOs in my OU, and have made a few of them for some simple tasks (like deploying a printer to certain users). Not actually a sysadmin... I'm a developer who is winging it. I wanted to create a GPO that would set a mapped folder for a certain security group (which I recently created and that contains only myself). Did the following:

        Created the GPO in MyOU - Users
        Removed the default Authenticated Users under Security Filtering
        Added the security group with my account to Security Filtering
        Set up the mapping via the User Configuration option
        Changed GPO Status to "Computer configuration settings disabled"
        Left WMI filtering to
        Closed the GPO at this point...
        Logged in as the target user; ran gpupdate /force
        Logged out, logged in, ran gpresult /r, no mention of my GPO
        Rebooted
        Logged in, re-ran gpupdate /force
        Logged out, logged in, ran gpresult /r, still no mention of my GPO

    If I log in with another completely different user, their RSOP information shows that the new GPO is being ignored due to a security restriction, so it appears to be "working" for other users. I just can't get it to actually show up in RSOP for the user it should be working for. Is there anything else I can do short of rebooting endlessly and crossing my fingers?
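
    One hedged check before blaming the GPO itself: group membership is only stamped into the access token at logon, so confirm the logged-on token actually contains the new group (the group name below is a placeholder), then dump user-scope policy results:

        whoami /groups | findstr /i "MyMappedFolderGroup"
        gpresult /r /scope:user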

    Read the article

  • EC2 instance is blocking all outbound connections, how to diagnose/fix?

    - by Fraggle
    My EC2 instance is blocking all outbound connections:

        wget http://www.google.com   ==> hangs
        ping google.com              ==> hangs
        ssh user@anyserver           ==> hangs

    I ran sudo iptables -F to eliminate all rules, to no avail. The AWS Management Console shows the Security Group for that instance has an inbound rule allowing ssh and port 80; I can't find anything about outbound rules there. Rebooted the instance, no change. If anyone knows how to diagnose or fix this, please help. Adding info:

        [ec2-user@ip-10-112-62-73 ~]$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 12:31:3D:06:31:BB
                  inet addr:10.112.62.73  Bcast:10.112.63.255  Mask:255.255.254.0
                  inet6 addr: fe80::1031:3dff:fe06:31bb/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1933 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1764 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:164075 (160.2 KiB)  TX bytes:343256 (335.2 KiB)
                  Interrupt:9

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:8 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:672 (672.0 b)  TX bytes:672 (672.0 b)

        [ec2-user@ip-10-112-62-73 ~]$ ip route show
        10.112.62.0/23 dev eth0  proto kernel  scope link  src 10.112.62.73
        default via 10.112.62.1 dev eth0
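
    One hedged thing to verify: iptables -F flushes rules but leaves chain policies alone, so a default DROP policy on the OUTPUT chain would survive the flush and match these symptoms exactly:

        sudo iptables -L -n -v           # check the "policy" shown on the OUTPUT chain header
        sudo iptables -P OUTPUT ACCEPT   # reset it only if that policy really is DROP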

    Read the article

  • Setting up a very mixed Active Directory network to work with PowerShell Remote Administration

    - by erictheavg
    Summary: I want to be able to monitor the computers on my network, but don't need it to be automated. We're too small to purchase anything like MOM, but too big to do anything manually (~100 machines in two locations). I just keep running into issues, and was wondering if there's a master list of Group Policy settings I can distribute to my environment to get remote PowerShell working. Environment: Our AD network is pretty mixed. The end users have XP SP3, Win 7, and Win 7 x64. The servers include Win2k3 SP2, Win2k8, Win2k8 x64, Win2k8 R2, and Win2k8 R2 x64. Details: I'm trying to get it to work with remote PowerShell, but I run into errors like the following:

        Connecting to remote server failed with the following error message : The WinRM client cannot process the request. Default authentication may be used with an IP address under the following conditions: the transport is HTTPS or the destination is in the TrustedHosts list, and explicit credentials are provided. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. For more information on how to set TrustedHosts run the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic.
            + CategoryInfo : OpenError: (:) [], PSRemotingTransportException
            + FullyQualifiedErrorId : PSSessionStateBroken

    Then I go to the computer (a Win2k3 SP2 server) and run winrm quickconfig per the recommendations via Google, and it says:

        Make these changes [y/n]? y
        WinRM has been updated to receive requests.
        WinRM service started.
        WSManFault
            Message = The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig".
        Error number: -2144108526 0x80338012
        The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig".

    That's right. It tells me to remedy my winrm quickconfig failure by running winrm quickconfig. I don't want to band-aid this project one Google search at a time. I'm sure there is a step-by-step tutorial out there on how to set up a network for PowerShell remote administration. Does anyone know of one? Books are acceptable. Thanks in advance! I didn't think my question would get this long.
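
    Not the master list asked for, but a hedged sketch of the per-machine baseline most guides distribute via GPO or startup script (XP and Win2k3 boxes additionally need WinRM 2.0 / PowerShell 2.0 installed first; the TrustedHosts wildcard is an example to be scoped down):

        # run elevated on each target machine
        Enable-PSRemoting -Force
        Set-Item WSMan:\localhost\Client\TrustedHosts -Value "*.corp.example.com" -Force
        Restart-Service WinRM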

    Read the article

  • Read/Select:Connection reset by peer (10054) during remote logoff in RealVNC

    - by Santhosh
    I am using RealVNC 4.5 Enterprise on my Windows 7 client. I use this to connect to a remote system which is also running RealVNC 4.5 (on Windows 7 again). Then I log off the remote system (the server), and all of a sudden the RealVNC viewer on the client closes with the message: "Read/Select: Connection reset by peer (10054)". It also asks if I want to reconnect again. Any idea why this is happening and how I can resolve it?

    Read the article

  • CentOS install proftpd with yum

    - by ServerBloke
    Server: CentOS 6.2 64-bit. How can I install proftpd using yum? A search for the package doesn't find it:

        yum list proftpd
        Error: No matching Packages to list

    My CentOS 6 VPS does find it, but this server doesn't. I have read that I need to install an rpm of some kind. How would I do that, and where is a reliable place to get it (64-bit)? I have done proftpd installs by compiling the source in the past but would prefer to use yum this time.
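
    For what it's worth, proftpd normally comes from the EPEL repository rather than the base CentOS repos; a hedged sketch (the epel-release URL is illustrative, so check the EPEL site for the current package name):

        rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
        yum list proftpd
        yum install proftpd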

    Read the article

  • redirect all youtube video requests to a specific one

    - by iTayb
    I'm on an IT team in my company and I would like to block YouTube for users. I don't want to just deny access to the whole YouTube domain, but only to replace the .flv/.mp4 request with one that I choose. That way, if someone tries to watch YouTube videos on the network, he'll get a video explaining why using our expensive bandwidth for pleasure is a no-no. I thought about using a packet manipulation program and just replacing the video ID with something that I want, but I didn't manage to do it right.
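
    One hedged direction, assuming clients can be forced through a Squid proxy and the video traffic is plain HTTP: a url_rewrite helper that swaps videoplayback requests for one fixed clip. All URLs are placeholders, and the helper protocol changed in later Squid releases:

        # squid.conf (illustrative): hand requests to a rewriter
        url_rewrite_program /etc/squid/rewrite_youtube.sh

        #!/bin/sh
        # /etc/squid/rewrite_youtube.sh: Squid feeds one request per line on
        # stdin; print a replacement URL to rewrite, or a blank line to pass
        # the request through untouched.
        while read url rest; do
            case "$url" in
                *youtube.com/videoplayback*|*googlevideo.com/videoplayback*)
                    echo "http://intranet.example.com/bandwidth-notice.mp4" ;;
                *)
                    echo ;;
            esac
        done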

    Read the article

  • bash: "set -e" and checking if a user exists makes the script exit

    - by Dahmad Boutfounast
    I am writing a script to install a program with bash. I want to exit on error, so I added "set -e" at the beginning of my script. The problem is that I have to check whether a user exists inside my script; to do this I am using grep "^${USER}:" /etc/passwd. If the user exists, the script runs normally, but if the user doesn't exist, this command exits with a non-zero status, and because of "set -e" the whole script exits with it. I don't want to exit in this case (I have to create the user and continue my installation). So what's the solution to keep my script running? I tried redirecting the output of grep to a variable, but I still have the same problem :( Thanks.
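
    A minimal sketch of the standard idiom: set -e exempts commands that are tested by an if (or guarded with ||), so the existence check can fail without killing the script:

        #!/bin/bash
        set -e

        # grep's non-zero exit is consumed by the `if`, not by `set -e`
        if grep -q "^${USER}:" /etc/passwd; then
            echo "user ${USER} already exists"
        else
            useradd "${USER}"   # create the user and carry on with the install
        fi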

    Read the article

  • Strange Exchange 2010 Mailbox issues

    - by jmreicha
    A little bit of information on my environment: I have 2 Exchange 2010 mailbox servers set up in a DAG, connected to a VNX SAN via fiber for the mailbox database disks. One of the servers seems to be working fine, as it isn't throwing any similar errors, but the other has been receiving strange errors in Event Viewer. I haven't been able to find a pattern to when these happen, and I'm not really sure what I should be looking at to troubleshoot this. At this point I am wondering whether the issue resides on the SAN, or whether this is because of an improper shutdown or something else on the Exchange side of things. Here are a few of the errors, if that helps:

    Read the article

  • Lync CMS replication is failing for all Domain Computers

    - by Ravi Kanneganti
    I have Lync Server 2010 and Active Directory installed on 2 different Windows Server 2008 R2 machines. I have added a Windows 7 PC to AD, added this computer to the Trusted Application Servers Pool, and published the topology. I want to build a UCMA application to extend Lync Server functionality, and I have installed the UCMA 3.0 SDK on the same computer where Lync Server resides. But CMS replication isn't happening, and "Get-CsManagementStoreReplicationStatus" always gives UpToDate as "False" for my Windows 7 PC. I have even tried "Invoke-CsManagementStoreReplication" but nothing changed. Also, this is the error message that I can see in the log file:

        TL_WARN(TF_COMPONENT) [2]0500.07B8::04/05/2012-14:55:07.296.00000f85 (XDS_Replica_Replicator,FileDistributeTask.Execute:filedistributetask.cs(165))(000000000043B3FA) Could not distribute the file. Exception: [System.IO.IOException: The process cannot access the file because it is being used by another process.
            at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
            at System.IO.File.Move(String sourceFileName, String destFileName)
            at Microsoft.Rtc.Xds.Replication.Replicator.Common.FileDistributeTask.Execute()].
        TL_NOISE(TF_DIAG) [2]0500.07B8::04/05/2012-14:55:07.296.00000f86 (XDS_Replica_Replicator,ReplicaTaskContainer<T>.OnError:replicataskcontainer.cs(166))(00000000005C39D4) Enter.
        TL_INFO(TF_COMPONENT) [2]0500.07B8::04/05/2012-14:55:07.296.00000f87 (XDS_Replica_Replicator,ReplicaTaskContainer<T>.OnError:replicataskcontainer.cs(171))(00000000005C39D4) Task error callback is about to be called.
        TL_VERBOSE(TF_DIAG) [2]0500.07B8::04/05/2012-14:55:07.296.00000f88 (XDS_Replica_Replicator,PerReplicaTaskManager<T>.HandleTaskError:perreplicataskmanager.cs(230))(000000000385E79C) Enter.
        TL_INFO(TF_COMPONENT) [2]0500.07B8::04/05/2012-14:55:07.296.00000f89 (XDS_Replica_Replicator,PerReplicaTaskManager<T>.HandleTaskError:perreplicataskmanager.cs(234))(000000000385E79C) Task encountered an error: [ReplicaTaskContainer<FileDistributeTask>{FileDistributeTask{E:\RtcReplicaRoot\xds-replica\from-master\data.zip, E:\RtcReplicaRoot\xds-replica\working\replication\from-master\data.zip, Access failed. (E:\RtcReplicaRoot\xds-replica\from-master\data.zip)}, FileDistributeTask{E:\RtcReplicaRoot\xds-replica\from-master\data.zip, E:\RtcReplicaRoot\xds-replica\working\replication\from-master\data.zip, }}]
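
    Since the replicator log reports a sharing violation on data.zip, one hedged next step is Sysinternals handle.exe (a separate download, not built into Windows) to see what has the file open; antivirus scanning the replica folder is a commonly reported culprit:

        handle.exe E:\RtcReplicaRoot\xds-replica\from-master\data.zip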

    Read the article

  • WebSVN accept untrusted HTTPS certificate

    - by Laurent
    I am using WebSVN with a remote repository. This repository uses the https protocol. After having configured WebSVN, I get on the WebSVN webpage:

        svn --non-interactive --config-dir /tmp list --xml --username '***' --password '***' 'https://scm.gforge.....'
        OPTIONS of 'https://scm.gforge.....': Server certificate verification failed: issuer is not trusted

    I don't know how to tell WebSVN to execute the svn command in a way that accepts and stores the certificate. Does someone know how to do it? UPDATE: It works! In order to have something which is well organized, I updated the WebSVN config file to relocate the subversion config directory to /etc/subversion, which is the default path for Debian:

        $config->setSvnConfigDir('/etc/subversion');

    In /etc/subversion/servers I created a group and associated the certificate to trust:

        [groups]
        my_repo = my.repo.url.to.trust

        [global]
        ssl-trust-default-ca = true
        store-plaintext-passwords = no

        [my_repo]
        ssl-authority-files = /etc/apache2/ssl/my.repo.url.to.trust.crt

    Read the article

  • git on HTTP with gitolite and nginx

    - by Arnaud
    I am trying to set up a server where my git repo would be accessible over HTTP(S). I am using gitolite and nginx (and gitlab for the web interface, but I doubt it makes any difference). I have searched the whole afternoon and I think I'm stuck. I think I have understood that nginx needs fcgiwrap to work with gitolite, so I tried several configurations, but none of them work. My repositories are at /home/git/repositories. Here are the three nginx configurations I have tried.

    1:

        location ~ /git(/.*) {
            gzip off;
            root /usr/lib/git-core;
            fastcgi_pass unix:/var/run/fcgiwrap.socket;
            include /etc/nginx/fcgiwrap.conf;
            fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
            fastcgi_param DOCUMENT_ROOT /usr/lib/git-core/;
            fastcgi_param SCRIPT_NAME git-http-backend;
            fastcgi_param GIT_HTTP_EXPORT_ALL "";
            fastcgi_param GIT_PROJECT_ROOT /home/git/repositories;
            fastcgi_param PATH_INFO $1;
            #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        }

    Result:

        > git clone http://myservername/projectname.git test/
        Cloning into test...
        fatal: http://myservername/projectname.git/info/refs not found: did you run git update-server-info on the server?

    and

        > git clone http://myservername/git/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs
        fatal: HTTP request failed

    2:

        location ~ /git(/.*) {
            fastcgi_pass localhost:9001;
            include /etc/nginx/fcgiwrap.conf;
            fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
            fastcgi_param GIT_HTTP_EXPORT_ALL "";
            fastcgi_param GIT_PROJECT_ROOT /home/git/repositories;
            fastcgi_param PATH_INFO $1;
        }

    Result: the same two failures as with configuration 1.

    3:

        location ~ ^.*\.git/objects/([0-9a-f]+/[0-9a-f]+|pack/pack-[0-9a-f]+.(pack|idx))$ {
            root /home/git/repositories/;
        }
        location ~ ^.*\.git/(HEAD|info/refs|objects/info/.*|git-(upload|receive)-pack)$ {
            root /home/git/repositories;
            fastcgi_pass unix:/var/run/fcgiwrap.socket;
            fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
            fastcgi_param PATH_INFO $uri;
            fastcgi_param GIT_PROJECT_ROOT /home/git/repositories;
            include /etc/nginx/fcgiwrap.conf;
        }

    Result: both clone URLs now fail with the 502 error on info/refs. Also note that with any of these configurations, when I try to clone with a project name that actually doesn't exist, I get a 502 error. Has anyone already succeeded in doing this? What am I doing wrong? Thanks.
    UPDATE: The nginx error log file said:

        2012/04/05 17:34:50 [crit] 21335#0: *50 connect() to unix:/var/run/fcgiwrap.socket failed (13: Permission denied) while connecting to upstream, client: 192.168.12.201, server: myservername, request: "GET /git/oct_editor.git/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "myservername"

    So I changed permissions on /var/run/fcgiwrap.socket, and now I have:

        > git clone http://myservername/git/projectname.git test/
        Cloning into test...
        error: The requested URL returned error: 403 while accessing http://myservername/git/projectname.git/info/refs
        fatal: HTTP request failed

    Here is the error.log entry I have now:

        2012/04/05 17:36:52 [error] 21335#0: *78 FastCGI sent in stderr: "Cannot chdir to script directory (/usr/lib/git-core/git/projectname.git/info)" while reading response header from upstream, client: 192.168.12.201, server: myservername, request: "GET /git/projectname.git/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "myservername"

    I keep on investigating.
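
    A hedged guess from that final chdir error: fcgiwrap may be receiving two SCRIPT_FILENAME values, the explicit one plus one derived from $document_root inside the included fcgiwrap.conf, and acting on the wrong one. A sketch that sets every parameter explicitly and drops the include, under that assumption:

        location ~ ^/git(/.*)$ {
            fastcgi_pass  unix:/var/run/fcgiwrap.socket;
            # no "include fcgiwrap.conf" here if that file also sets SCRIPT_FILENAME
            fastcgi_param SCRIPT_FILENAME     /usr/lib/git-core/git-http-backend;
            fastcgi_param GIT_PROJECT_ROOT    /home/git/repositories;
            fastcgi_param GIT_HTTP_EXPORT_ALL "";
            fastcgi_param PATH_INFO           $1;
            fastcgi_param QUERY_STRING        $query_string;
            fastcgi_param REQUEST_METHOD      $request_method;
            fastcgi_param CONTENT_TYPE        $content_type;
            fastcgi_param CONTENT_LENGTH      $content_length;
        }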

    Read the article

  • hung up troubleshooting packet discards

    - by Chris Satola
    I realize my question is generic, but hopefully someone may have some guidance for me. My network consists of Cisco switches. I am seeing a significant amount (upwards of millions of packets per day) of transmit drops between two switches, one a 3750 and the other a 3560. Peak throughput on the link is only in the upper 400 Mbps range, so it shouldn't be a bandwidth issue. At this point I am somewhat clueless about where to look or what tools I can use to determine which packets are dropping and why. I can set up a SPAN port on that link and wireshark it, but I don't know if that would tell me anything. Does anyone have any suggestions? Thanks in advance.
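
    A hedged starting point on the IOS side (interface names are placeholders): output-queue drops on the 3750/3560 are very often microbursts overrunning the shallow per-queue buffers, something averaged throughput graphs never show:

        show interfaces Gi1/0/1 counters errors
        show mls qos interface Gi1/0/1 statistics
        show platform port-asic stats drop Gi1/0/1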

    Read the article

  • What options to use for Accurate bacula backup?

    - by Kiss Stefan
    It's actually 2 questions in one. The first is a bit more theoretical. When specifying accurate options, how does bacula figure out if a file needs to be backed up? Is it a simple AND? As in, if the options are Accurate = sm5, bacula will not back up the file if ((size = old size) AND (modtime = old modtime) AND (md5 = old md5)). Is that correct? Do any of the options take precedence? As in, would a file be skipped if its modification time is different but it has the same md5sum? Are there any implied options that you cannot ignore? Practical case (bacula 5.0.1): I have to back up an svn repo. In order to be able to make incremental backups as simply as possible, I am hotcopying it (client run before) to another location that bacula will back up (then deleting it with client run after). Now in the fileset I have Accurate = spnd5. This should tell bacula to take into consideration size, permission bits, number of links, decreases in size, and md5sum. However, an incremental also includes a full copy of the svn. What am I doing wrong? It seems that it takes into account creation time even though I have not specified it.
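
    For reference, a hedged sketch of how the pieces fit together in bacula 5.x (names and paths are placeholders): Accurate must also be enabled on the Job, and the option letters combine so that a file is re-backed-up as soon as any one listed attribute differs:

        # Director config, illustrative only
        Job {
          Name = "svn-backup"
          Accurate = yes
          # ... client/pool/schedule as usual
        }
        FileSet {
          Name = "svn-hotcopy"
          Include {
            Options {
              Accurate = "sm5"    # size + mtime + MD5; ignores inode/ctime churn
              Signature = MD5
            }
            File = /srv/backup/svn-hotcopy
          }
        }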

    Read the article

  • How do I install something from source and make it available to root?

    - by pwny
    I have a CentOS VM and I need to install the latest version of Ruby on it. Unfortunately, yum only makes Ruby 1.8.6 available, so I'm trying to install Ruby from source. Here's what I'm using:

        cd /usr/src
        sudo -s
        wget http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p125.tar.gz
        tar -xvzf ruby-1.9.3-p125.tar.gz
        cd ruby-1.9.3-p125
        ./configure
        make && make install

    The problem is that once that's done, I can only use Ruby as a regular user, but I need to use it as root to install some gems. For example, as a regular user I can do ruby -v and it works, but sudo ruby -v outputs bash: ruby: command not found. What am I missing to make stuff I install from source available to all users?
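
    A hedged explanation that fits these symptoms: make install puts ruby in /usr/local/bin, which CentOS omits from sudo's secure_path, so root's PATH lookup never sees it:

        sudo /usr/local/bin/ruby -v          # works: the full path bypasses PATH lookup
        sudo grep secure_path /etc/sudoers   # confirm /usr/local/bin is missing
        # then run visudo and extend the line, e.g.:
        # Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin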

    Read the article

  • JBOSS App Server vs. Glassfish

    - by codingbear
    I'm quite a newbie with Glassfish. What are some differences between JBoss and Glassfish? Why would you choose one over the other? I'm trying to read up on Glassfish, but it is really hard to pinpoint things that I may need before I start installing and trying to deploy some applications on it. UPDATE: It would be good if any additional information on a JBoss and Glassfish comparison were provided (e.g. technologies they support, performance, etc.)

    Read the article

  • What is the best way to migrate to a new file share in a new domain?

    - by cbattlegear
    I am looking to move a file share (100 GB or so) from one domain/server to a new domain and server. I would like to do this with little to no downtime, and if possible I would like to be able to map permissions from the groups/users in the current system to groups/users in the new domain. A side question: a large number of the files in the system are Office documents with hard-coded links to the old file server. Is there any way to programmatically change all those links to point at the new file server?
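
    A hedged sketch of the usual tooling: robocopy can mirror the share with ACLs, ownership, and timestamps, and repeated passes shrink the cutover to a short final sync; translating SIDs between the two domains is a separate step (subinacl is the commonly cited tool for re-mapping ACL entries). Server and share names are placeholders:

        robocopy \\oldserver\share \\newserver\share /MIR /COPYALL /R:1 /W:1 /LOG:C:\migrate.log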

    Read the article
