Daily Archives

Articles indexed Monday June 25 2012

Page 8/16 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Why is the 'if' statement considered evil?

    - by Vadim
    I just came from the Simple Design and Testing Conference. In one of the sessions we were talking about evil keywords in programming languages. Corey Haines, who proposed the subject, was convinced that the if statement is absolute evil. His alternative was to create functions with predicates. Can you please explain to me why if is considered evil? I understand that you can write very ugly code by abusing if, but I don't believe that it's that bad.
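
    One common reading of "replace if with functions" is to dispatch on behaviour instead of branching on data. A rough analogue, sketched in shell since that is the language most entries on this page use (the shape/size values are purely illustrative):

      #!/bin/sh
      # each case becomes a function; adding a new shape means adding a
      # function, not editing an ever-growing if/elif chain
      area_circle() { echo "3.14159 * $1 * $1" | bc -l; }
      area_square() { echo "$1 * $1" | bc -l; }
      shape=circle size=2
      "area_$shape" "$size"    # selects the behaviour without an if statement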

    Read the article

  • Last week for early bird discounts to St. Louis Days of .NET 2012

    - by Arkham
    This is the last week to get the early bird $75 discount for St. Louis Days of .NET 2012 on Aug 2-4!! This year’s conference will have: A Microsoft keynote speaker discussing web technology and trends. Great sessions by great speakers! Over half of the sessions to be presented on Aug 3rd and 4th have been posted to the site and you can expect another 30 sessions to be posted this week. Although the MVC session has a waitlist, the other pre-compiler workshops on Aug 2nd still have spots available. Network with your peers at our Thursday and Friday evening social events. There will be food, drink, music, gaming, magic, and more! Open space sessions and a Lab in the Lounge where you can see what some of your peers are building and discuss in depth. While there is still room now, this year’s attendance will be capped at 900, so don’t hesitate! And remember, groups of 10 or more get an additional $25 off the ticket price.

    Read the article

  • Creating a Strong Bridge to the Post PC World

    - by Webgui
    Moving from location to location requires strong roads. When crossing a barrier though, like a body of water or valley, we are required to build a strong bridge to get us from point A to point B in a way that is fast, safe, and easy. Yet we are not talking here about driving a car or riding a bus. As we in the computing world are witnessing the move to the post-PC era, modernizing and migrating legacy applications to harness the power of HTML5 web, cloud and mobile is one of the most difficult challenges enterprises have faced. Constant technological changes have weakened the business value of legacy systems, which have been developed over the years through huge investments. There are, of course, several risks in this move. Do you choose to simply rewrite the code of legacy apps and transform them to HTML5 one by one? This is quite expensive (according to research firm Gartner, the cost is $6 - $26 per line of code). Of course, the pace of the rewriting process is very slow – around 170 lines per day for each developer – which slows down business productivity in a world in which no organization can afford to fall behind. Other questions include whether the new cloud-based apps will have the same functionality as the trusted applications that worked for you for years. How will the user experience be affected? And of course, what about data security? So we are faced with the challenge of building a sturdy bridge to stabilize our move, in order to allow us to confidently and easily move our legacy applications into the post-PC era. We at Gizmox are excited to release the first downloadable Community Technology Preview (CTP) of our Instant CloudMove Transposition Studio. Developers: To download the tool and try it out for yourself, please visit http://www.visualwebgui.com/download.aspx. The CTP is the first and only tool-based solution allowing any Microsoft Visual Studio developer to extend VB6 and .NET enterprise client/server applications into HTML5 web, cloud and mobile applications, including the ability to upgrade their code and UI while doing so. It is the only solution to fully replicate enterprise desktop application behavior in the post-PC era. With Instant CloudMove, the transposed application is available on any mobile or tablet device, in any browser, and across any client operating system. Moreover, the extended application logic and data remain on the server behind the firewall, and therefore the application’s front end is secured-by-design. We would love for you to try out the tool for yourselves and let us know what you think. How are you finding the move?

    Read the article

  • SonicPoint AP Clients Not Able To Connect With DHCP

    - by Mike Keller
    This is my first time setting up anything like this, so please be gentle. I'm a web developer who fell into setting up a few SonicWall NSA 4200's... I've tried doing as much research on this through Google and ServerFault but haven't been able to hunt down an answer as to what I'm doing wrong. We've got two virtual access points set up here, one that is intended for employees (tied to X2) and the other for guests (tied to X2:V1). We are not using the DHCP server on the NSA 4200, but one already on the network. When a client connects to the employee SSID they are able to obtain an IP from the network's DHCP server. However, when attempting to connect to the guest SSID the client does a search for a DHCP server but can't find one. Any clues, resources, or answers would be appreciated.

    Read the article

  • Authenticating Linked Servers - SQL Server 8 to SQL Server 10

    - by jp2code
    We have an old SQL Server 2000 database that has to be kept because it is needed on our manufacturing machines. It also maintains our employee records, since they are needed on these machines for employee logins. We also have a newer SQL Server 10 database (I think this is 2008, but I'm not sure) that we are using for newer development. I have recently learned (i.e. today) that I can link the two servers. This would allow me to access the employee tables from the newer server. Following the SF post SQL Server to SQL Server Linked Server Setup, I tried adding the link. In our SQL Server 2000 machine, I got this error: Similarly, on our SQL Server 10 machine, I got this error: The messages, though worded differently, probably say the same thing: I need to authenticate, somehow. We have an Active Directory, but it is on yet another server. What, exactly, should be done here? A guy HERE said to check the Security settings, but did not say what else to do. Both servers are set to SQL Server and Windows Authentication mode. Now what?

    Read the article

  • Folder default ACLs not inherited when new file is created

    - by Flavien
    I'm a bit of a beginner with Unix systems, but I'm running Cygwin on my Windows Server, and I am trying to figure out something related to extended ACLs. I have a directory to which I set the following ACLs: Administrator@MyServer ~ $ setfacl -m d:u:Someuser:r-- somedir Administrator@MyServer ~ $ getfacl somedir/ # file: somedir/ # owner: Administrator # group: None user::rwx group::r-x mask:rwx other:r-x default:user::rwx default:user:Someuser:r-- default:group::r-x default:mask:rwx default:other:r-x As you can see, most of the default ACLs have the x bit. Then when I create a file in it, it doesn't inherit the ACLs it is supposed to: Administrator@MyServer ~ $ touch somedir/somefile Administrator@MyServer ~ $ getfacl somedir/somefile # file: somedir/somefile # owner: Administrator # group: None user::rw- user:Someuser:r-- group::r-- mask:rwx other:r-- It's basically missing the x bit everywhere. Any idea why?
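
    A likely explanation, worth verifying on your setup: the default ACL is only half of the result, because the mode the creating program asks for caps the inherited entries. touch creates files with mode 0666 (no execute), while mkdir asks for 0777. A quick check, sketched under that assumption:

      # a subdirectory should inherit the x bits that the plain file did not,
      # because mkdir requests 0777 where touch requests 0666
      mkdir somedir/subdir
      getfacl somedir/subdir
      # the bits can still be added to an existing file afterwards if needed:
      setfacl -m u::rwx,g::r-x somedir/somefile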

    Read the article

  • Which JMX statistics to watch out for in Catalina/Tomcat?

    - by geoaxis
    I have configured OpenNMS to collect all kinds of numeric data coming out of Tomcat 7's JMX interface. There are a lot of things. I am interested in monitoring this Tomcat instance so that I can avoid downtime and lockups. What metrics should I be watching out for? I am already monitoring things like CPU, memory, and network via SNMP. With this JMX connection, the things that I find interesting so far are Catalina:type=GlobalRequestProcessor,name="ajp-bio-/a.b.c.d-XXXX" RequestsCount, and Catalina:type=Manager,context=/myApp,host=localhost active sessions and their maximum.

    Read the article

  • Problems using "at" with Apache

    - by Alex Padgett
    I'm trying to use a PHP script to create at jobs, but when it comes time to execute the jobs, nothing seems to be happening. I've tried to output any errors to log files, but have had no luck. It seems obvious that it's a permissions issue, because when I set Apache to run as my personal user, everything works fine. However, when I exec wget directly from PHP, everything also works fine, so it seems that Apache has the correct permissions to use it. The problem appears to be when using at in conjunction with Apache, so I need to find a way to make this work with Apache running as its own user. Here is the command I'm using: echo "wget -qO- http://example.com/" | at now + 1 minute 2>&1 Any ideas? EDIT: Apache can create the at jobs; it just seems that when they execute, nothing happens.
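
    A couple of hedged checks that often narrow this down (the www-data username is an assumption for a Debian-style install; substitute your Apache user): at mails a job's output to its owner rather than logging it, so redirecting inside the job makes failures visible, and the at.allow/at.deny lists can silently block a user from at.

      # run the same job as the Apache user and capture its output in a file
      sudo -u www-data sh -c 'echo "wget -qO- http://example.com/ >/tmp/at-test.out 2>&1" | at now + 1 minute'
      sudo -u www-data atq                                      # is the job actually queued?
      grep -n www-data /etc/at.allow /etc/at.deny 2>/dev/null   # is the user blocked from at?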

    Read the article

  • Have apache choose a php version based on the extension in the url, but with a single file on the filesystem

    - by Somejan
    I want to configure a local apache server to serve php files with different php versions. In my document root I have phpinfo.php, now if I go to http://localhost/phpinfo.php4, I want to see the phpinfo.php file processed with php4, if I go to http://localhost/phpinfo.php5 I want to see the same file processed with php5. Note: both php 4 and 5 are already installed side by side, I have no problem configuring apache to treat files that have a .php4 or .php5 extension on the filesystem with the correct php version. What I want is for apache to do the following: If the url-path ends in .php5, serve the file which has a .php extension on the filesystem using the application/x-httpd-php5 handler. If the url-path ends in .php4, serve the same file with the .php extension on the filesystem using the application/x-httpd-php4 handler.

    Read the article

  • How to partition my two hard drives

    - by Thoma Bigueres
    I've got a computer running under the OS "Windows Server 2008 R2" on which I have: 60GB disk C: NTFS (Disk 0) 40GB unallocated space (Disk 1) I would like to partition my disks so that I'll have: 30GB disk C: 70GB disk D: Can you help me with the steps I should follow to get this configuration? I saw that first of all I should merge the two volumes into one, but when I right-click on the C: volume, I can't click on the "Extend Volume" link. Do you know how I can overcome this? Thanks a lot

    Read the article

  • Skype performance in IPSEC VPN

    - by dunxd
    I've been challenged to "improve Skype performance" for calls within my organisation. Having read the Skype IT Administrators Guide, I am wondering whether we might have a performance issue where the Skype clients in a call are all on our WAN. The call is initiated by a Skype client at our head office, and terminated on a Skype client in a remote office connected via IPSEC VPN. Where this happens, I assume the traffic from Client A (encrypted by Skype) goes to our ASA 5510, where it is further encrypted, sent to the remote ASA 5505, decrypted, then passed to Client B, which decrypts the Skype encryption. Would the call quality benefit if the traffic didn't go over the VPN, but instead only relied on Skype's encryption? I imagine I could achieve this by setting up a SOCKS5 proxy in our HQ DMZ for Skype traffic. Then the traffic goes from Client A to the proxy, over the Skype relay network, then arrives at the Cisco ASA 5505 as any other internet traffic, and then to Client B. Is there likely to be any performance benefit in doing this? If so, is there a way to do it that doesn't require a proxy? Has anyone else tackled this?

    Read the article

  • Group Policy for Setting Passwords: Server 2003 Domain

    - by user1236435
    In my 2003 domain, I am being asked to set a password policy requiring passwords to expire every 4 months, and also requiring users to change their password on their next login, due to a security issue. In my domain, my OUs are set up by location, then drilled down to city, and the users and computers are in separate sub-domains. My question is, how do I set this up for my domain? Will I need to set the policy up for loopback? Can I configure this for just a specific OU? Any suggestions on how to move forward? Any advice is much appreciated, and thanks in advance!

    Read the article

  • Why do ICMP Redirect Host messages happen?

    - by El Barto
    I'm setting up a Debian box as a router for 4 subnets. For that I have defined 4 virtual interfaces on the NIC where the LAN is connected (eth1). eth1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.1.1 Bcast:10.1.1.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:d98/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:6026521 errors:0 dropped:0 overruns:0 frame:0 TX packets:35331299 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:673201397 (642.0 MiB) TX bytes:177276932 (169.0 MiB) Interrupt:19 Base address:0x6000 eth1:0 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.2.1 Bcast:10.1.2.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000 eth1:1 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.3.1 Bcast:10.1.3.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000 eth1:2 Link encap:Ethernet HWaddr 94:0c:6d:82:0d:98 inet addr:10.1.4.1 Bcast:10.1.4.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:19 Base address:0x6000 eth2 Link encap:Ethernet HWaddr 6c:f0:49:a4:47:38 inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::6ef0:49ff:fea4:4738/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:199809345 errors:0 dropped:0 overruns:0 frame:0 TX packets:158362936 errors:0 dropped:0 overruns:0 carrier:1 collisions:0 txqueuelen:1000 RX bytes:3656983762 (3.4 GiB) TX bytes:1715848473 (1.5 GiB) Interrupt:27 eth3 Link encap:Ethernet HWaddr 94:0c:6d:82:c8:72 inet addr:192.168.2.5 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::960c:6dff:fe82:c872/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:110814 errors:0 dropped:0 overruns:0 frame:0 TX packets:73386 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:16044901 (15.3 MiB) TX bytes:42125647 (40.1 MiB) Interrupt:20 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:22351 errors:0 dropped:0 overruns:0 frame:0 TX packets:22351 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2625143 (2.5 MiB) TX bytes:2625143 (2.5 MiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:41358924 errors:0 dropped:0 overruns:0 frame:0 TX packets:23116350 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:3065505744 (2.8 GiB) TX bytes:1324358330 (1.2 GiB) I have two other computers connected to this network. One has IP 10.1.1.12 (subnet mask 255.255.255.0) and the other one 10.1.2.20 (subnet mask 255.255.255.0). I want to be able to reach 10.1.1.12 from 10.1.2.20. Since packet forwarding is enabled in the router and the policy of the FORWARD chain is ACCEPT (and there are no other rules), I understand that there should be no problem to ping from 10.1.2.20 to 10.1.1.12 going through the router. 
However, this is what I get: $ ping -c15 10.1.1.12 PING 10.1.1.12 (10.1.1.12): 56 data bytes Request timeout for icmp_seq 0 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 81d4 0 0000 3f 01 e2b3 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 1 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 899b 0 0000 3f 01 daec 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 2 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 78fe 0 0000 3f 01 eb89 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 3 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 14b8 0 0000 3f 01 4fd0 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 4 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 8ef7 0 0000 3f 01 d590 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 5 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 ec9d 0 0000 3f 01 77ea 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 6 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 70e6 0 0000 3f 01 f3a1 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 7 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 b0d2 0 0000 3f 01 b3b5 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 8 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 f8b4 0 0000 3f 01 6bd3 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 9 Request timeout for icmp_seq 10 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 1c95 0 0000 3f 01 47f3 10.1.2.20 10.1.1.12 Request timeout for icmp_seq 11 Request timeout for icmp_seq 12 Request timeout for icmp_seq 13 92 bytes from router2.mydomain.com (10.1.2.1): Redirect Host(New addr: 10.1.1.12) Vr HL TOS Len ID Flg off TTL Pro cks Src Dst 4 5 00 0054 62bc 0 0000 3f 01 01cc 10.1.2.20 10.1.1.12 Why does this happen? From what I've read the Redirect Host response has something to do with the fact that the two hosts are in the same network and there being a shorter route (or so I understood). They are in fact in the same physical network, but why would there be a better route if they are not on the same subnet (they can't see each other)? What am I missing? 
Some extra info you might want to see: # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.8.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo 192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth3 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 192.168.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth2 10.1.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.1.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.1.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 10.1.3.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth2 0.0.0.0 192.168.2.1 0.0.0.0 UG 100 0 0 eth3 # iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination # iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- !10.0.0.0/8 10.0.0.0/8 MASQUERADE all -- 10.0.0.0/8 !10.0.0.0/8 Chain OUTPUT (policy ACCEPT) target prot opt source destination
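
    For what it's worth, the redirects are consistent with the layout described above: 10.1.1.0/24 and 10.1.2.0/24 both sit on eth1, so the router forwards each packet back out the interface it arrived on and tells the sender there is a supposedly more direct path. If that behaviour is unwanted, it can be switched off on the router; a minimal sketch, assuming the Debian box above and the eth1 interface name:

      # both the "all" and the per-interface switches must be off for it to stop
      sysctl -w net.ipv4.conf.all.send_redirects=0
      sysctl -w net.ipv4.conf.eth1.send_redirects=0
      # persist across reboots (usual Debian location):
      printf 'net.ipv4.conf.all.send_redirects = 0\nnet.ipv4.conf.eth1.send_redirects = 0\n' >> /etc/sysctl.conf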

    Read the article

  • Array on servers which receive several hundred GB of data a day

    - by Matthew
    This is hopefully a simple question. Right now we are deploying servers which will serve as data warehouses. I know with RAID 5 the best practice is 6 disks per RAID 5. However, our plan is to use RAID 10 (both for performance and safety). We have a total of 14 disks (16 actually, but two are being used for the OS). Keeping in mind that performance is very much an issue, which is better: doing several smaller RAID 10s, or one large RAID 10? One large RAID 10 had been our original plan, but I want to see if anyone has any opinions I haven't thought of. Please note: this system was designed for RAID 1+0, so losing half of the raw storage capacity is not an issue. Sorry I hadn't mentioned that initially. The concern is more whether we want to use one large RAID 1+0 containing all 14 disks, or several smaller RAID 1+0s and then stripe across them using LVM. I know the best practice for higher RAID levels is to never use more than 6 disks in an array.
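
    If the "several smaller RAID 1+0s striped with LVM" option gets tested, a rough shape of it looks like this (a sketch only; the device names, chunk sizes and the 6+8 split are illustrative assumptions):

      # two smaller RAID 10 sets out of the 14 data disks...
      mdadm --create /dev/md0 --level=10 --raid-devices=6 /dev/sd[b-g]
      mdadm --create /dev/md1 --level=10 --raid-devices=8 /dev/sd[h-o]
      # ...then stripe a logical volume across both of them with LVM
      pvcreate /dev/md0 /dev/md1
      vgcreate vg_dw /dev/md0 /dev/md1
      lvcreate -i 2 -I 256 -l 100%FREE -n lv_data vg_dw   # -i 2 = stripe over both PVs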

    Read the article

  • Too many concurrent connections Exchange 2010. What else is there to check?

    - by hydroparadise
    I thought that I had this under control before, but for some reason during our last email marketing promo I started receiving the following from our mass email client (built in house) again: The message could not be sent to the SMTP server. The transport error code is 0x800ccc67. The server response was 421 4.3.2 The maximum number of concurrent connections has exceeded a limit, closing transmission channel. There are several places I've checked to make sure that wouldn't be an issue. First I checked that the receive connector was set to accept an adequate number of connections on our relay connector (1000 connections). Then, I would later find out about throttling policies. I created one and set the policy properties I knew to set to 1000: EWSMaxConcurrency, OWAMaxConcurrency, and CPAMaxConcurrency. Still, the email client starts receiving the error shortly after 100 messages have been sent, and this takes about 15-30 seconds. The process is then repeatable, but the error still gets received at the same spot every time. Is there a rate setting that I am missing? Was there a Windows update that I missed looking at? Should the software have its own throttling feature?

    Read the article

  • FastCGI Error Access to the script denied

    - by ArtWorkAD
    I have a Debian Squeeze server running nginx + php-fpm + FastCGI. I have a Typo3 installation on this server which runs well. Now I installed OTRS and I get an error that I do not understand: 2012/06/25 15:35:38 [error] 16510#0: *34 FastCGI sent in stderr: "Access to the script '/opt/otrs/bin/fcgi-bin/index.pl' has been denied (see security.limit_extensions)" while reading response header from upstream, client: ..., server: support.....com, request: "GET /otrs/index.pl HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "support.....com", referrer: "http://support.....com/" Why do I get this error? The OTRS directory is writable by the webserver, so this is not the problem. Any ideas?
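
    The error text points at php-fpm's security.limit_extensions whitelist, which by default only allows .php. Note that OTRS's fcgi-bin/index.pl is a Perl FastCGI script, so a Perl FastCGI backend may be the better long-term fit; but if the intent really is to have this pool accept the .pl request, a minimal sketch of the change (the pool file path assumes Debian's php5-fpm layout):

      # add .pl to the pool's allowed extensions, then restart php-fpm
      sudo sed -i 's|^;*security.limit_extensions.*|security.limit_extensions = .php .pl|' /etc/php5/fpm/pool.d/www.conf
      sudo /etc/init.d/php5-fpm restart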

    Read the article

  • Why do Hyper-V and Windows Backup crash (BSOD) after a successful backup?

    - by Payson Welch
    Hello, I am running Server 2008 R2 with a handful of Hyper-V guest nodes. If Windows Backup runs without any of the Hyper-V nodes running, the server is fine. If the backup runs while the Hyper-V nodes are running, it is fine until a few minutes after the backup completes, and then it BSODs. The storage location for the backup is iSCSI. I am wondering if anyone has any input on what might be causing this? I don't have the Hyper-V nodes set up on a VLAN and there is only one NIC on the server. Is it possible this is a networking/driver issue, and if so, how would I reconfigure the networking to fix this?

    Read the article

  • cannot get mssql working with sql server 2005

    - by Ryan
    I'm a MySQL/Apache user trying my hand with IIS and SQL Server, so please, if this is a stupid question, have patience. I'm using IIS version 7.5, PHP version 5.3.13 and SQL Server 2005. IIS is running on port 90; not sure if that will make a difference or not. I know my SQL server is running because I can explore/connect to it in Server Management Studio. I know PHP is configured properly, because //localhost:90/phpinfo.php works fine. I updated the php_msql.dll extension line in php.ini to: extension=ext/php_msql.dll EDIT- However, when I run phpinfo(), under the "configure command" row this is present: --without-mssql I found/downloaded the ntwdblib.dll and placed it in both sys32 and the PHP root. All these things were supposed to fix the issue, and they haven't. This is the code I'm using, straight from php.net: <?php // Server in the this format: <computer>\<instance name> or // <server>,<port> when using a non default port number $server = 'localhost'; // Connect to MSSQL $link = mssql_connect($server, 'uname', 'pwd'); if (!$link) { die('Something went wrong while connecting to MSSQL'); } ?> Obviously I'm using a real username and password, but when I load the file in my browser, I receive a 500 error. Upon checking the log, this is what is displayed: 2012-06-25 12:41:29 ::1 GET /test.php - 90 - ::1 Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/536.5+(KHTML,+like+Gecko)+Chrome/19.0.1084.56+Safari/536.5 500 0 0 5 That (to me) doesn't help me much. What am I doing wrong? Thank you

    Read the article

  • Windows Server 2008 Active Directory DNS setup

    - by Mister IT Guru
    I have to set up a small Windows network inside my bigger Linux/Mac infrastructure. In order to get the Windows clients logging onto the domain, I have had to make the DC their primary DNS server, which seems to have worked. I would much prefer to have one DNS server running on my network, or at least one authoritative server running on the network. I have a USG 200 router/firewall and I can configure some static records for DNS, but I am not sure what I need to put in place to get DNS and AD working together; any hints and tips appreciated.

    Read the article

  • Apache proxy pass in nginx

    - by summerbulb
    I have the following configuration in Apache: RewriteEngine On #APP ProxyPass /abc/ http://remote.com/abc/ ProxyPassReverse /abc/ http://remote.com/abc/ #APP2 ProxyPass /efg/ http://remote.com/efg/ ProxyPassReverse /efg/ http://remote.com/efg/ I am trying to have the same configuration in nginx. After reading some links, this is what I have : server { listen 8081; server_name localhost; proxy_redirect http://localhost:8081/ http://remote.com/; location ^~/abc/ { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://remote.com/abc/; } location ^~/efg/ { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://remote.com/efg/; } } I already have the following configuration: server { listen 8080; server_name localhost; location / { root html; index index.html index.htm; } location ^~/myAPP { alias path/to/app; index main.html; } location ^~/myAPP/images { alias another/path/to/images autoindex on; } } The idea here is to overcome a same-origin-policy problem. The main pages are on localhost:8080 but we need ajax calls to http://remote.com/abc. Both domains are under my control. Using the above configuration, the ajax calls either don't reach the remote server or get cut off because of the cross origin. The above solution worked in Apache and isn't working in nginx, so I am assuming it's a configuration problem. I think there is an implicit question here: should I have two server declarations or should I somehow merge them into one? EDIT: Added some more information EDIT2: I've moved all the proxy_pass configuration into the main server declaration and changed all the ajax calls to go through port 8080. I am now getting a new error: 502 Connection reset by peer. Wireshark shows packets going out to http://remote.com with a bad IP header checksum.

    Read the article

  • solaris + dladm + what is unknown state and how to bring it to up?

    - by yael
    I installed Solaris 10 on my Netra machine. From dladm show-dev I can see which interfaces are down or up. All interfaces are connected to the Cisco switch, and all LEDs are lit on all LAN cards, but I do not understand why all interfaces except e1000g0 are in an unknown state. Please advise how to bring the unknown interfaces up. # dladm show-dev e1000g0 link: up speed: 1000 Mbps duplex: full e1000g1 link: unknown speed: 0 Mbps duplex: unknown e1000g2 link: unknown speed: 0 Mbps duplex: unknown e1000g3 link: unknown speed: 0 Mbps duplex: unknown nxge0 link: unknown speed: 0 Mbps duplex: unknown nxge1 link: unknown speed: 0 Mbps duplex: unknown nxge2 link: unknown speed: 0 Mbps duplex: unknown nxge3 link: unknown speed: 0 Mbps duplex: unknown
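
    One hedged explanation worth checking: on Solaris 10, dladm show-dev typically reports "unknown" for interfaces whose driver instance has not been plumbed yet, regardless of the link LEDs. A quick test on one of the idle ports (the address below is purely illustrative):

      ifconfig e1000g1 plumb
      ifconfig e1000g1 10.1.1.2 netmask 255.255.255.0 up
      dladm show-dev e1000g1   # should now report the negotiated speed/duplex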

    Read the article

  • Cisco IOS PBR - PBRing Skype

    - by Azz
    I've got a very simple question, which seems to be extremely difficult when put into practice. I have a Cisco IOS router with two Internet links: one over a WAN (through a proxy, etc.), the other direct Internet. Most traffic destined for the internet goes through the proxy over the WAN. I want Skype traffic (why the client uses Skype, I don't know..) to go out of the direct Internet link, while the rest of the traffic goes over the WAN through the proxy, etc. Apparently Skype is very difficult to detect/classify because of its many adaptations to being blocked. Is there any way to identify Skype on an IOS router (2911), and set its next-hop IP/interface? Thank you, Aaron

    Read the article

  • Sendmail background process sometimes processes queue, but sendmail -q always works

    - by markmcb
    I'm using sendmail version 8.14.4 on Fedora 15 to send email. My Rails app uses delayed_job to queue up emails. Messages will queue up in /var/spool/mqueue as expected, but don't always get processed. I can see the messages and sendmail is definitely running in the background. Restarting the process does nothing. However, when I issue the sendmail -q command, sendmail gets to work and starts sending. The really odd thing is that this behavior only occurs sometimes. Other times messages queue up and are delivered as expected. I've tried tweaking various sendmail configs to reduce the time between queue processing (for example, adding define('confMIN_QUEUE_AGE', '0')dnl to /etc/mail/sendmail.mc), but nothing seems to do the trick. Any ideas what might be the root cause?
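
    One thing worth ruling out (a hedged guess, since the exact startup options aren't shown): the daemon only drains the queue on its own if it was started with a -q interval, and on Fedora that interval normally comes from /etc/sysconfig/sendmail. A quick check and, if needed, a shorter interval:

      ps ax | grep '[s]endmail'            # look for something like: sendmail -bd -q1h
      grep QUEUE /etc/sysconfig/sendmail   # Fedora's queue-run interval, e.g. QUEUE=1h
      # shorten the interval and restart (the 15m value is illustrative):
      sudo sed -i 's/^QUEUE=.*/QUEUE=15m/' /etc/sysconfig/sendmail
      sudo systemctl restart sendmail.service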

    Read the article

  • Grep all files in a directory and print matches with file name

    - by javanix
    I have a list of log files that I create as part of a video encoding script that I wrote. I would like to search all of them and print out certain statistics from the encode - how fast they were encoded, what settings were used, etc. I can search for the average framerate in one file via this one-liner: cat ${filename} | grep average which outputs: work: average encoding speed for job is 23.211176 fps and search for the ratefactor: cat ${filename} | grep RF I would like to search all files in the directory and print off one, or preferably both pieces of information along with the filename. Is there any way I can use find or grep to get this in a one-liner, or do I need to write a script? I would like output like this: /home/javanix/filename.log <RF line> <average line> I would like this to work on either FreeBSD 9 or Ubuntu 12.04.
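
    A minimal sketch, assuming the logs all end in .log and sit in one directory: grep's -H flag prints the file name with every match, -e allows both patterns in one pass, and a small loop gets close to the requested layout.

      grep -H -e 'average' -e 'RF' /home/javanix/*.log
      # filename on its own line, followed by its RF and average lines:
      for f in /home/javanix/*.log; do echo "$f"; grep -e 'RF' -e 'average' "$f"; done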

    Read the article

  • Terminate child processes on ctrl-c

    - by jackweirdy
    In tiny core linux, I have the following script: #!/bin/sh # ~/.X.d/freerdp.sh rdp(){ while true do xfreerdp -f [IP Address] done } rdp & It's pretty simple; when X starts up and checks the .X.d directory (as is the case in tiny core) it finds and executes this script. The script starts up freerdp and keeps a connection open to the server by restarting it whenever it closes. As you can see from the rdp & line, the function is run in the background to allow X to continue its startup routine. The problem is that whenever I cancel X with a Ctrl-Alt-Backspace the rdp process doesn't die. I'm looking for a way to kill the process as soon as X finishes, either through: A) a script, executed on X closing, which kills the process or B) by modifying the script to check the return value of the xfreerdp command. NB - if the solution does check the return value, it must only end if the command fails to open the X display. For that reason, if you could point me to a reference for xfreerdp return values I'd be grateful.
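
    A hedged sketch of option (B): instead of testing xfreerdp's own exit code (which would also stop the loop when the RDP connection merely drops), probe whether the X display is still there, assuming xdpyinfo is available on the Tiny Core install.

      #!/bin/sh
      # ~/.X.d/freerdp.sh -- keep reconnecting only while the X display exists
      rdp() {
          while xdpyinfo >/dev/null 2>&1; do
              xfreerdp -f [IP Address]
          done
      }
      rdp &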

    Read the article
