Search Results

Search found 11178 results on 448 pages for 'syntax checking'.

  • Exchange 2010 - Certificate error on internal Outlook 2013 connections

    - by Lorenz Meyer
    I have Exchange 2010 and Outlook 2003. The Exchange server has a wildcard SSL certificate for *.domain.com installed (for use with autodiscover.domain.com and mail.domain.com). The local FQDN of the Exchange server is exch.domain.local. With this configuration there was no problem. Now I have started upgrading all Outlook 2003 clients to Outlook 2013, and I consistently get a certificate error in Outlook: "The name on the security certificate is invalid or does not match the name of the site." I understand why I get that error: Outlook 2013 is connecting to exch.domain.local while the certificate is for *.domain.com. I was ready to buy a SAN (Subject Alternative Names) certificate containing the three names exch.domain.local, mail.domain.com and autodiscover.domain.com, but there is a hindrance: the certificate provider (in my case GoDaddy) requires that each domain be validated as our property, which is not possible for an internal domain that is not reachable from the internet. So that turns out not to be an option. Creating a self-signed SAN certificate with an Enterprise CA is another option, but it is barely viable: there would be a certificate error on every webmail access, and I would have to install the certificate on all Outlook clients. What is a recommended, viable solution? Is it possible to disable certificate checking in Outlook? Or how could I change the Exchange server configuration so that the public domain name is used for all connections? Or is there another solution I'm not thinking of? Any advice is welcome.
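    If the last approach (using the public name for internal connections) is the way to go, a minimal Exchange Management Shell sketch of it looks like the following; the server name EXCH and the virtual directory identities are assumptions and have to match the real environment, and the public names must also resolve internally (split DNS):
        # point Autodiscover and the client-facing virtual directories at the public name
        Set-ClientAccessServer -Identity EXCH -AutoDiscoverServiceInternalUri https://autodiscover.domain.com/Autodiscover/Autodiscover.xml
        Set-WebServicesVirtualDirectory -Identity "EXCH\EWS (Default Web Site)" -InternalUrl https://mail.domain.com/EWS/Exchange.asmx
        Set-OABVirtualDirectory -Identity "EXCH\OAB (Default Web Site)" -InternalUrl https://mail.domain.com/OAB
    With the internal URLs on names covered by the *.domain.com certificate, Outlook 2013 no longer needs exch.domain.local on the certificate at all.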

  • Office 2010 OCT Outlook Filepaths

    - by vlannoob
    I'm playing around with customizing Office 2010 installs on my network. Normally I just do a full manual install, but as the environment grows (and the lazier I get) it's becoming a pain to do it manually every time. I've read up on and downloaded the Office 2010 OCT tool, and it looks relatively straightforward, with one exception: the Outlook profile. I can get around it by leaving it all as default (or not enabling offline use), but I'd like to customise it slightly so that it's all set up no matter who logs onto the PC. The only issue I have, and my question, is: in the OCT Outlook section, what do you enter into the Path and Filename for the OST file and the Offline Address Book settings under the Enable Offline Use section? I'm sweet with everything else, just that one section, and I think if I bugger that one up it will kill the whole Outlook profile. It would need to point to each user's unique profile path, correct? I have a fair idea of what should be there, but I'm struggling with the correct syntax. I know this is a stupid question, but it's late in the day and my brain is fried ;) As usual, any and all help/assistance is appreciated ;)
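    A hedged suggestion for that field, on the assumption that the OCT path boxes expand standard Windows environment variables (worth verifying on a test machine before rolling it out): point the OST at a per-user location such as
        %userprofile%\AppData\Local\Microsoft\Outlook\outlook.ost
    which resolves to each user's own profile at logon, so a single value works no matter who logs onto the PC; the same %userprofile%-based pattern can be used for the Offline Address Book folder.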

  • Does Active Directory on Server 2003 R2 support IPv6 subnets in Sites and Services?

    - by NorbyTheGeek
    I've been experimenting with IPv6 at our organization. The domain controllers (all 2003 R2) and most of the servers (2003 R2 / 2008 / 2008 R2) have IPv6 configured. We have a subnet assigned through a tunnel provider. Currently, the only workstation running IPv6 is mine (Windows 7). I have been noticing that my workstation is picking domain controllers in other sites for things like DFS, and I finally realized that I don't have the IPv6 subnets set up in Active Directory Sites and Services (ADSS). But when I try to add an IPv6 prefix in ADSS, it tells me: "Windows cannot create the object 2001:xxxx:xxxx:xxxx::/64 because: The object name has bad syntax." I believe I may be using the 2008 version of the admin tools (ADSS reports version 6.1.7601.17514), so I'm wondering if my 2003 R2 Active Directory schema doesn't support configuring IPv6 subnets in ADSS. Is this true? UPDATE: Even with the 2008 R2 schema in Active Directory, I'm having the same problem. How can I get my IPv6 subnets into Sites and Services?

  • Data recovery from corrupt Ubuntu partition/directory (question about a previous answer)

    - by JoshMaurice
    I have an Ubuntu installation that won't boot anymore. I asked a previous question about it here: http://superuser.com/questions/15916/ubuntu-chkdsk-equivalent
    Bolotov replied: "As I see from your previous question you can boot Windows, so you could use dskprobe from the Windows XP Service Pack 2 Support Tools to make sure that the fs type is correct ... but it's already correct: fs type 7 is NTFS. The message 'The type of the filesystem is RAW. CHKDSK is not available for RAW drives.' means that Windows can't determine the fs type for some reason. As we see, the fs type is correct. To run chkdsk on your Windows partition you can install the Windows Recovery Console, boot into the Recovery Console and check your disk. After checking the disk you will gain access to your c:\ubuntu\disks. I think you can mount your Linux partition (which is in a file) as a usual loop-back device: mount -o loop [path to your linux-loopback-partition]. But you should mount the Windows partition first."
    So now I'd like to know: within the Recovery Console I will be issuing the commands "chkdsk -r", then "mount -o loop [path to windows partition]", and then "mount -o loop c:\ubuntu\disks", correct? I do have a ("corrupt and unreadable") c:\ubuntu\disks directory, so that appears to be the correct path to the Linux partition; do you know the path to the Windows partition? Would that be just "c:\"?
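    For the mounting part, a hedged sketch of what the loop-back mount typically looks like from a Linux live CD once chkdsk has repaired the NTFS volume; the device name /dev/sda1 and the Wubi image name root.disk are assumptions and may differ on this system:
        # mount the Windows (NTFS) partition first, then the Wubi disk image stored inside it
        sudo mkdir -p /mnt/windows /mnt/wubi
        sudo mount -t ntfs-3g /dev/sda1 /mnt/windows
        sudo mount -o loop /mnt/windows/ubuntu/disks/root.disk /mnt/wubi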

  • How to monitor bandwidth use of each device on wifi network

    - by GWLlosa
    I have in my home a standard Comcast cable internet connection. I have it going from the wall to a cable modem, and from the modem to a late-series Linksys router, which provides wired and wireless networking. The vast majority of the users are wireless connections. For day-to-day tasks, this connection is fully sufficient for all my needs. However, on regular occasions, we have social gatherings that involve many people bringing laptops and other PCs and using the network and internet simultaneously, frequently for gaming. I have no administrative oversight over these machines; they have been known to be riddled with spyware and/or bloatware, or to be running torrents, legal or otherwise. The only reason I care is that on a regular basis, one of the machines will flatline my internet bandwidth and consume it all in order to upload/download/spam people/whatever. When this happens, the latency of the connections for gaming and the like becomes unacceptable, and everyone suffers. My question is: is there a system I can set up whereby I can easily monitor the various systems connected to my wireless network, see how much bandwidth each one is using, and for what ends? That way, at a glance, I can spot the offending machine and kick it from the connection, without having to go from machine to machine, checking each one's "bandwidth used" properties manually, and dealing with the owner's indignant protests all the while. I understand this will likely involve 3rd-party software and/or hardware; my issue is I don't even know where to begin.

  • Measuring performance indicators on a cluster

    - by Aditya Singh
    My architecture is based on Amazon. An ELB load balancer distributes POST requests among m1.large instances. Every instance has an nginx server on port 80 which hands the requests to 4 python-tornado servers on the backend, which handle them. These tornado servers take about 5-10 ms to respond to one request, but that is only the internal compute time of each request. I want to put this setup under test: measure the response time from the ELB to the upstream and back, see how it varies as the QPS throughput is increased, and plot a graph of time vs. QPS vs. latency along with other factors like CPU and memory. Is there software to do that, or should I log everything somewhere with latency checks and then analyze the whole log to get the numbers out? I would also need to write a self-monitor which keeps checking the end-to-end response time. Is it possible to do that with a script from within the server? If so, will it be accurate?
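    For the self-monitor part, a minimal sketch of the kind of script that could run on one of the instances (or anywhere else); ELB_URL is a placeholder, and the assumption is that curl's end-to-end timing is accurate enough for the graph:
        #!/bin/sh
        # sample the end-to-end latency through the ELB once a minute and append it to a log
        ELB_URL="http://my-elb.example.com/endpoint"   # placeholder, not the real endpoint
        while true; do
            t=$(curl -s -o /dev/null -w '%{time_total}' -X POST -d 'payload=test' "$ELB_URL")
            echo "$(date +%s) $t" >> elb_latency.log
            sleep 60
        done
    For the QPS dimension, a load generator in the ab/siege family is usually run alongside a logger like this rather than instead of it.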

  • Port forwarding/network settings preventing game hosting

    - by Xitcod13
    I asked where to post this question on Stack Overflow meta and they directed me here. I'm on a wireless connection and I want to host games in StarCraft: Brood War, and I've been looking everywhere for how to accomplish that. My internet is amazingly fast, so it's not a connection-speed problem (and when I play other people's games I don't experience lag). I found out that I need to have a static IP, but I have already checked that I do (I downloaded a program to make my IP static and it already was; the program asked which router I use, so I think it checked the router settings already). I found out that I need to allow StarCraft access through the firewall, which I already did (I have ZoneAlarm, but I allowed it everything possible except receiving emails). I have recently noticed that a few people actually can join my games, but most of them cannot. I don't know what's going on here. I really want to be able to host games; overall, how do I go about checking what is wrong with the network? Update: Alright, I figured out what I did wrong in the first part: I did not actually set up forwarding on the router. I have tried to fix my mistake. I went to the forwarding options in my router (as the guide for my specific router suggests), but when I click OK I get an "incorrect IP address" message. 192.168.1.1 is my router's address. The default address that appears there is 192.168.1.(blank). I have set it to my computer's current IPv4 address, which is 192.168.1.23. I hope this works; if so I will post it as an answer and mark it.

  • How to quickly check if two columns in Excel are equivalent in value?

    - by mindless.panda
    I am interested in taking two columns and getting a quick answer on whether they are equivalent in value or not. Let me show you what I mean: it's trivial to make another column (EQUAL) that does a simple compare for each pair of cells in the two columns. It's also trivial to use conditional formatting on one of the two, checking its value against the other. The problem is that both of these methods require scanning the third column or the colour of one of the columns. Often I am doing this for columns that are very, very long, so visual verification would take too long, nor do I trust my eyes. I could use a pivot table to summarize the EQUAL column and see whether any FALSE entries occur. I could also enable filtering, click the filter on EQUAL, and see what entries are shown. Again, all of these methods are time-consuming for what seems to be such a simple computational task. What I'm interested in finding out is whether there is a single-cell formula that answers the question. I attempted one above in the screenshot, but clearly it doesn't do what I expected, since A10 does not equal B10. Does anyone know of one that works, or of some other method that accomplishes this?
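    A commonly used single-cell check, assuming the data sits in A1:B1000 (adjust the ranges to the real column length), counts the pairs that differ:
        =SUMPRODUCT(--(A1:A1000<>B1:B1000))
    This returns 0 when the two columns are identical in value; wrapping it as =SUMPRODUCT(--(A1:A1000<>B1:B1000))=0 gives a direct TRUE/FALSE answer in one cell.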

  • unicorn and nginx: "something went wrong"

    - by achempion
    I am trying to deploy my app via Capistrano. The deploy completes, but when I start nginx and open my site in the browser I see "We're sorry, but something went wrong." I use unicorn; see my configs at https://gist.github.com/3904032. If I start the server via rails s -e production, it works! I think the error may be because I can't restart the server:
        root@li272-194:~# /etc/init.d/nginx restart
        Restarting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
        configuration file /etc/nginx/nginx.conf test is successful
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
        [emerg]: still could not bind()
    Any ideas? The nginx error log shows:
        2012/10/17 02:57:41 [error] 3271#0: *1 could not find named location "@myapp", client: 91.192.62.77, server: 178.79.153.194, request: "GET / HTTP/1.1", host: "178.79.153.194"
        2012/10/17 02:19:08 [crit] 2448#0: *8 connect() to unix:/srv/zarcon/shared/unicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 91.192.62.77, server: zarkon, request: "GET / HTTP/1.1", upstream: "http://unix:/srv/zarcon/shared/unicorn.sock:/", host: "178.79.153.194"
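    The bind() failures suggest something is already holding port 80, often an old nginx master process that the init script failed to stop. A hedged first check, assuming a typical Linux toolset:
        # see which process currently owns port 80, then stop it cleanly before restarting
        netstat -tlnp | grep ':80 '
        # or, if it is indeed a stray nginx master:
        nginx -s stop && /etc/init.d/nginx start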

  • Check for unique rows, but ignore one particular column

    - by user269148
    I have an XML document that looks like this: columns A to S with headers, and 1922 rows. It is a backup of some SMS messages, and I want to get rid of duplicates. The problem is that the time in the readable_date column has been messed up. There is nothing wrong with the date, but the clock time is wrong, so I have split that column into three: year, day and clock. I know I can use a standard filter, but it only looks for unique values in a single column. What I want is a row check similar to this: F(x) = check whether row 2 (columns A to the end) is equal to row 3 (columns A to the end), ignoring column R; if true, delete row 3, otherwise check whether row 2 is equal to row 4, and so on. I need to ignore one particular column every time, and I need to do this for the complete sheet, with the check continuing on every row once the first one is done checking for duplicates... If anyone has a better solution, please say so. Anyway, can anyone help?
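    One hedged way to do this without comparing row pairs explicitly, assuming headers in row 1, data in rows 2-1922, and a spreadsheet version that has TEXTJOIN: build a key that joins every column except R, then flag later repeats of that key.
        T2: =TEXTJOIN("|", FALSE, A2:Q2, S2)
        U2: =COUNTIF($T$2:T2, T2)>1
    Fill both formulas down to row 1922; TRUE in column U marks the second and later occurrences of a message (column R ignored), so filtering on TRUE and deleting those rows keeps exactly one copy of each.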

  • Why are my Windows 7 updates continuously failing?

    - by Chris C.
    I'm an advanced level user here with an odd issue. I have two Windows Updates that are failing to install, every single time. I'm getting a mysterious "Code 1" error on both updates, an error for which I'm having difficulty finding a solution. The updates in question are:
        Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
        System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]
    Because these updates are failing, the Shut Down button in my start menu always has the shield icon next to it, indicating that "new" updates will be installed on shut down. But, of course, they'll fail and when the PC is restarted, the shield icon is still there. When checking the update history and viewing the details of the failed updates, I get the following:
        Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
        Installation date: 6/29/2011 3:00 AM
        Installation status: Failed
        Error details: Code 1
        Update type: Important
        A security issue has been identified leading to MFC application vulnerability in DLL planting due to MFC not specifying the full path to system/localization DLLs. You can protect your computer by installing this update from Microsoft. After you install this item, you may have to restart your computer.
        More information: http://go.microsoft.com/fwlink/?LinkId=216803
    and:
        System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]
        Installation date: 6/28/2011 3:00 AM
        Installation status: Failed
        Error details: Code 1
        Update type: Important
        This tool is being offered because an inconsistency was found in the Windows servicing store which may prevent the successful installation of future updates, service packs, and software. This tool checks your computer for such inconsistencies and tries to resolve issues if found.
        More information: http://support.microsoft.com/kb/947821
    About My System: I'm running Windows 7 Home Premium 64-bit. This is a custom PC build and the OS was installed fresh, not an upgrade from a previous version. I've been running this system for about four months. Windows Updates aside, the system is usually quite stable.

  • Cannot write to directory after taking ownership

    - by jeff charles
    I had a directory on an internal hard drive that was created in an old Windows 7 install. After re-installing my operating system, when I try to create a new directory inside that directory, I get an Access Denied message. This isn't a protected directory, just a random directory I created at the drive root (that drive was not the C drive in either install). I tried to take ownership by opening the folder properties, going to the Security tab, clicking Advanced, going to the Owner tab, clicking Edit, selecting my user account, checking "Replace owner on subcontainers and objects", and clicking Apply. There were no error messages and I closed the dialogs. I rebooted, checked the owner on that folder and a couple of subfolders, and it appears to be set correctly. However, I am still getting an Access Denied message when trying to create a directory in it. I've also tried using attrib -R . in an admin command prompt to remove any possible read-only attribute inside the directory, but I am still unable to create a directory using a non-admin prompt (it does work in an admin prompt). Is there anything I can do to get write access to that folder and its contents in a non-elevated context without disabling UAC?
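    Ownership by itself does not grant any rights; the account also needs an ACL entry with write access. A hedged sketch of doing both from an elevated command prompt, where D:\data and MyUser are placeholders for the real folder and account:
        rem take ownership of the folder and everything under it
        takeown /f D:\data /r /d y
        rem grant the account full control, inherited by files (OI) and subfolders (CI)
        icacls D:\data /grant MyUser:(OI)(CI)F /t
    After that, creating directories from a non-elevated prompt should work without touching UAC.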

  • PHP application failed to connect after a network plugged back in

    - by tntu
    My data center appears to have had some issues with their network, and as a result my server suffered from on-and-off network connectivity for about an hour. After the connection had been completely re-established, my code still kept reporting the same issue over and over until I restarted the service. The code is a simple PHP script that loops forever: it checks the Apple feedback server, sleeps for a few minutes, and then starts all over again. Now, I understand the error being generated while the network is down, but once it came back up, why did the error continue until I restarted the script? Does PHP have something that needs to be re-initialized?
    Messages log:
        Dec 20 08:57:22 server kernel: r8169: eth0: link down
        Dec 20 08:57:28 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:29 server kernel: r8169: eth0: link down
        Dec 20 08:57:33 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:33 server kernel: r8169: eth0: link down
        Dec 20 08:57:37 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:38 server kernel: r8169: eth0: link down
        Dec 20 08:57:44 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:44 server kernel: r8169: eth0: link down
        Dec 20 08:57:52 server kernel: r8169 0000:06:00.0: eth0: link up
        Dec 20 08:57:52 server kernel: r8169: eth0: link down
        Dec 20 09:10:58 server kernel: r8169 0000:06:00.0: eth0: link up
    PHP error:
        PHP Warning: stream_socket_client(): php_network_getaddresses: getaddrinfo failed: Name or service not known in /home/push/feedback.php on line 36
    Code on line 36:
        $apns = stream_socket_client('ssl://feedback.sandbox.push.apple.com:2196', $errcode, $errstr, 60, STREAM_CLIENT_CONNECT, $stream_context);
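    The warning comes from getaddrinfo, i.e. name resolution failing at the moment of the call; in a long-running process the stuck state can also come from resolver state (such as /etc/resolv.conf read once at startup) never being refreshed. One hedged mitigation is to retry the connect instead of treating a single failure as permanent; a minimal sketch, assuming $stream_context is the same TLS context the original script already builds:
        <?php
        // keep retrying the feedback connection instead of treating one failure as permanent
        do {
            $apns = @stream_socket_client(
                'ssl://feedback.sandbox.push.apple.com:2196',
                $errcode,
                $errstr,
                60,
                STREAM_CLIENT_CONNECT,
                $stream_context
            );
            if ($apns === false) {
                error_log("feedback connect failed ($errcode): $errstr - retrying in 60s");
                sleep(60);
            }
        } while ($apns === false);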

  • Can a non-redundant RAID5 cause any serious problems (compared to RAID0)?

    - by leemes
    I used to have a three-disc RAID5 (mdadm) in my computer for personal media storage (music, videos, photos, programs, games, ...). It had three discs of 750 GB each, resulting in an array capacity of 1.5 TB. One day (one year ago), I needed one of those discs to install another operating system. I thought I didn't need the redundancy anymore, since I back up the most important stuff (personal photos, for example) to an external disc anyway. So I decided to remove one of the three discs without converting the array to RAID0 or to two separate discs, because I had no temporary storage (and one cannot simply convert a RAID5 to RAID0, AFAIK). So now, for about one year, I have had a non-redundant RAID5 with 2 of 3 discs running. Sometimes one of the discs has a defective contact at the power cable or something similar, causing the drive to stop working temporarily (I don't know exactly what it is). Since it still works after rebooting the computer, and in most cases after calling some mdadm commands, it wasn't that problematic. Note that the data is not very critical, since I still have a backup of the most important stuff. But in the last few weeks, one of the drives has been failing very frequently (every few hours), so it is getting really annoying to manage. My questions are: Is there any disadvantage (apart from the annoying management) of a non-redundant RAID5 (with one drive less than usual) compared to a RAID0? If I understand it correctly, both have no redundancy and the same capacity, and on a temporary drive failure I can restart the array in both cases, assuming the drive itself still works after the failure. Can the drive contents be altered by a drive failure, making the array inconsistent? If so, can I tell mdadm to check the array for failures (without a file-system-level checking tool)? And since the drive most probably only has a defective contact, causing it to fail for just a second, can I tell mdadm to restart the array automatically, so that I won't even notice the failure if no application tried to access the file system during it?
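    Not an answer to the redundancy question, but for keeping an eye on the transient drop-outs, a hedged mdadm sketch; /dev/md0 is a placeholder for the real array:
        # report the current array state and which member dropped out
        mdadm --detail /dev/md0
        cat /proc/mdstat
        # run mdadm in monitor mode so member failures are mailed out instead of going unnoticed
        mdadm --monitor --scan --daemonise --mail=admin@example.com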

  • Changing the prompt in telnet

    - by wim
    With some help from people on here, I was able to set a custom prompt in an ssh session (thanks!). Now I need to do the same in telnet, but I'm not sure what syntax I could use for that. Basically the telnet prompt is just a > character; I need to modify it to something I can more reliably detect in automation jobs. Hope this makes sense. From inside telnet, trying to escape the command with a bang, like !PS1=spam and !PS2=eggs, did not change it.
        wim@wim-acer:~$ ssh [email protected] -i ~/.ssh/guest_nopassphrase -t "export PS1='Sending a custom prompt \w \$ '; exec sh"
        Sending a custom prompt ~ $ set
        HOME='/var/tmp'
        IFS=' '
        LOGNAME='guest'
        PATH='/sbin:/usr/sbin:/bin:/usr/bin'
        PPID='1128'
        PS1='Sending a custom prompt \w $ '
        PS2='> '
        PS4='+ '
        PWD=''
        SHELL='/bin/sh'
        TERM='xterm'
        USER='guest'
        Sending a custom prompt ~ $ telnet localhost
        <snip>
        Entering character mode
        Escape character is '^]'.
        > !set
        CONSOLE='/dev/ttyp0'
        HOME='/var/tmp'
        IFS=' '
        LOGNAME='root'
        PATH='/sbin:/bin:/usr/sbin:/usr/bin'
        PPID='546'
        PREVLEVEL='N'
        PS1='\w \$ '
        PS2='> '
        PS4='+ '
        PWD='/var/tmp'
        RESPAWN_COUNT='1'
        RESPAWN_LAST='0'
        RESPAWN_MAX='5'
        RESPAWN_TIME='5'
        ROOTDEV='/dev/sla1'
        RUNLEVEL='5'
        SHELL='/bin/false'
        TERM='linux'
        USER='root'
        >
        > Connection closed by foreign host
        Sending a custom prompt ~ $
        Connection to 192.168.1.124 closed.
        wim@wim-acer:~$

  • Preventing endless forwarding with two routers

    - by jarmund
    The network in question looks basically like this:
                                       /----Inet1
                                      /
        H1---[111.0/24]---GW1---[99.0/24]
                                        \
                                         \----GW2-----Inet2
    Device explanation:
        H1: host with IP 192.168.111.47
        GW1: Linux box with IPs 192.168.111.1 and 192.168.99.2, as well as its own route to the internet
        GW2: generic wireless router with IP 192.168.99.1 and its own route to the internet
        Inet1 & Inet2: two possible routes to the internet
    In short: H1 has more than one possible route to the internet. H1 is supposed to access the internet only via GW2 when that link is up, so GW1 has some policy-based routing just for H1:
        ip rule add from 192.168.111.47 table 991
        ip route add default via 192.168.99.1 table 991
    While this works as long as GW2 has a direct link to the internet, the problem occurs when that link is down. What then happens is that GW2 forwards the packet back to GW1, which forwards it back to GW2 again, creating an endless loop of TCP ping-pong. The preferred result would be that the packet was simply dropped. Is there something that can be done with iptables on GW1 to prevent this? Basically, an iptables-friendly version of "if the packet comes from GW2 but originated from H1, drop it". Note 1: it is preferable not to change anything on GW2. Note 2: H1 needs to be able to talk to both GW1 and GW2, and vice versa, but only GW2 should lead to the internet. TL;DR: H1 should only be allowed internet access via GW2, but still needs to be able to talk to both GW1 and GW2. EDIT: The interfaces on GW1 are br0.105 for the '99' network and br0.111 for the '111' network. The solution may or may not be obnoxiously simple, but I have not been able to produce the proper iptables syntax myself, so help would be most appreciated. PS: This is a follow-up to this question.
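    A minimal sketch of that rule on GW1, using the interface names from the edit; the assumption is that packets sourced from H1 should only ever enter GW1 on br0.111, so anything carrying H1's source address that arrives on br0.105 has bounced off GW2 and can be dropped:
        # drop forwarded packets that come back in on the 99-net interface but claim H1 as their source
        iptables -A FORWARD -i br0.105 -s 192.168.111.47 -j DROP
    Replies from GW2 or the internet towards H1 carry other source addresses, so they are unaffected, and H1 can still reach both gateways directly.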

  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers, but have yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances, though, it seems as though each instance should be completely decoupled from all other instances. i.e. if content is uploaded to the web server's local filesystem from a custom CMS backend, then that content won't be available if it is subsequently requested from a different web server in the auto-scaling group. Is that right? I met with a representative of our existing hosting provider recently and he was claiming that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates, so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me... should I be skeptical about this? I'm a little worried because my (non-technical) boss, who ultimately makes the decisions, is all for signing up to this cloud solution because it won't require any extra work. I'm sure they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that'll slow things down. I'd really appreciate it if somebody could clear this up for me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.

  • How to Access User Directory shared by Apache on OS X Mountain Lion?

    - by schluchc
    When trying to access the local user web page at localhost/~username, I get a "403 Forbidden". The system web page in /Library/WebServer/Documents is accessible at localhost/, though, so I assume Apache is working fine. I know this problem has been discussed several times, also on Super User. I implemented and checked everything I could find, but I still couldn't solve the problem and would be glad if someone had a suggestion for this particular case:
    sudo apachectl -t returns Syntax OK. I have a username.conf file in /etc/apache2/users/:
        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride AuthConfig Limit
            Order allow,deny
            Allow from all
        </Directory>
    as proposed here [SuperUser] and in several other tutorials. The permissions of the username.conf file are -rw-r--r-- root wheel, as they should be. The httpd.conf is unchanged and therefore contains the line Include /private/etc/apache2/extra/httpd-userdir.conf. That file in turn contains:
        UserDir Sites
        Include /private/etc/apache2/users/*.conf
        <IfModule bonjour_module>
            RegisterUserSite customized-users
        </IfModule>
    So the httpd*.conf files should be OK. The permissions of /Users/username/Sites are drwxr-xr-x 10 username staff, and -rw-r--r--@ 1 username staff for the index.html. In the error log I simply get:
        [Sun Nov 25 22:14:32 2012] [error] [client 127.0.0.1] (13)Permission denied: access to /~username/ denied
    And yes, after each change I did sudo apachectl restart. Any help on how to solve the problem, or on how to analyze it further, would be highly appreciated!
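    Two hedged suggestions that fit this exact symptom on Mountain Lion: the _www user needs search (execute) access to every directory on the path, including /Users/username itself, and an inherited deny ACL on the home folder produces the same (13)Permission denied even when the plain Unix permissions look right:
        # let other users (including _www) traverse the home directory
        chmod o+x /Users/username
        # check for ACL entries (shown after a '+') that might still deny access
        ls -led /Users/username /Users/username/Sites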

  • Servers/Websites Keep Going Down

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers and server commands, etc. I've recently had a problem with all of the sites on our servers going down all at once, and then I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically telling me that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:
        They said I have "strange logs" in my Apache webserver log: error: sh: fetch: command not found
        My php.ini memory limit is 256M, which is very high; it should be 32M or 64M.
        The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down.)
        I have some WordPress sites with plugins getting errors like:
            PHP Warning: pack(): Type H: illegal hex digit G in...
            PHP Fatal error: Cannot use object of type stdClass as array in...
            PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
            PHP Fatal error: Call to undefined function file_exists() in...
            PHP Parse error: syntax error, unexpected '<'
    I know that's a lot, but I really am at wits' end and have no idea what to do now. If anyone could give me some advice or point me in the right direction I would greatly appreciate it! Thanks! Oh, and here are the specs for my server:
        RAM: 2048MB
        CPU Shares: 40
        Primary Disk: 50GB
        Data Transfer: 75GB
        Port Speed: 5Mbps
        Type: Linux

  • How to configure apache to basic authentication or allow when ntlm while proxying?

    - by trotzim
    Here is my setup:
        browser --- Apache proxy --- ISA server --- internet
    The ISA server requires authentication. The goal is to allow HTTPS through the two proxies. A configuration that works with HTTP looks something like this (yes, I want to use ProxyRequests, not ProxyPass):
        <VirtualHost *:8080>
            ...
            SetEnv auth-proxy-chain on
            ...
            ProxyRequests On
            ProxyRemote * http://isaproxy:80
            ...
            <Proxy *>
                AuthName "ISA server auth"
                AuthType Basic
                [here a module to authenticate]
                Require valid-user
                Allow from all
            </Proxy>
            ...
        </VirtualHost>
    The user can authenticate against the Apache proxy, then the authentication chain is sent to the ISA server, which allows the HTTP traffic. But when the browser switches to HTTPS, the ISA server "speaks" NTLM and breaks the authentication on the Apache proxy. If I try to use the SSPI (NTLM) module with something like this (the rest of the vhost unchanged):
        <Proxy *>
            AuthName "ISA server auth"
            AuthType ntlm
            [ SSPI stuff ]
            Require valid-user
            Allow from all
        </Proxy>
    the Apache server rejects the authentication (or the ISA server does, I don't really know). I used Wireshark to look at the nominal exchange while using the ISA server directly as the proxy: the first auth exchange is Basic, then it switches to NTLM (and the challenge continues with NTLM). How should I configure Apache so that it passes the NTLM authentication through to the ISA proxy without checking it(*)? Or so that it rewrites headers to force Basic authentication? (*) It seems not to be as easy as it looks...

  • FreeBSD: problem with Postfix after updating LDAP

    - by Olexandr
    On the server I installed openldap-server; an OpenLDAP client had already been installed on this machine. The installed openldap-client (2.4.16) was older than the new openldap-server (2.4.21), and the client was updated as well. The OpenLDAP client is used by Postfix on this server, and after all the updates Postfix cannot start anymore. The error when running postfix stop|start is:
        /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"
    The library directory contains libldap-2.4.so.7, but libldap-2.4.so.6 has been removed from the server. When I deinstall the current version of openldap-client, the system writes ===> Deinstalling for net/openldap24-client and finishes OK, but when I run "make install" the system writes:
        ===> Installing for openldap-sasl-client-2.4.23
        ===> openldap-sasl-client-2.4.23 depends on shared library: sasl2.2 - found
        ===> Generating temporary packing list
        ===> Checking if net/openldap24-client already installed
        ===> An older version of net/openldap24-client is already installed (openldap-client-2.4.21)
        You may wish to ``make deinstall'' and install this port again by ``make reinstall'' to upgrade it properly.
        If you really wish to overwrite the old port of net/openldap24-client without deleting it first, set the variable "FORCE_PKG_REGISTER" in your environment or the "make install" command line.
        *** Error code 1
        Stop in /usr/ports/net/openldap24-client.
        *** Error code 1
        Stop in /usr/ports/net/openldap24-client.
    Updating the ports doesn't help, and Postfix still reports the error:
        /libexec/ld-elf.so.1: Shared object "libldap-2.4.so.6" not found, required by "postfix"
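    Whatever happens with the client port itself, the ld-elf.so.1 message usually just means Postfix is still linked against the old library and needs to be rebuilt against libldap-2.4.so.7. A hedged sketch of that step from the ports tree (mail/postfix is an assumption; adjust to the Postfix port that is actually installed):
        # rebuild postfix so it links against the new libldap
        cd /usr/ports/mail/postfix
        make deinstall
        make reinstall clean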

  • Some of my keys are automatically being pressed along with other keys

    - by Santosh
    History: the last time my computer shut down, it was due to a power failure. Now some keys are automatically being pressed when I type. The last thing I did to the keyboard settings was add a keyboard layout (on Ubuntu). What is happening:
        Whenever I press c, xc is written
        s gives me sd
        d gives me sd
        e gives me we
        2 gives me 23, so when I want @ it gives me @#
        3 gives me 23
        Pressing Caps Lock gives me F3 and vice versa.
    All other keys are either working fine or I don't use them. I have two operating systems, Ubuntu and Windows. I use Windows very little; I found this problem on Ubuntu, but as soon as I logged in to Windows (to check) I found that Windows has the same problem. Effects on my life: this starts from the time of login; I even have problems typing my password. Whenever I try to save a web page, it is bookmarked automatically. Whenever I copy, it is cut automatically. I have to spend more than half of my time correcting what I have typed. Note: Typing thisd quwesdtion wasd rweally a big pain to mwe.

  • Nginx Server Block Not Working? - Already running other vhosts, just this one not working

    - by daveaspinall
    I'm running a Debian 6 LEMP server with multiple virtual hosts, and everything has been fine for the 5 or so sites already on it. But I've just tried adding another and for some reason it's just not working. By not working I mean that in Chrome I get the "Oops! Google Chrome could not connect to subdomain.domain.net" error. I've changed the domain to subdomain.example.com for security, and the IP is masked.
    Hosts file (I have multiple subdomains):
        xxx.xxx.xx.xxx *.example.com *.example
    Server block:
        server {
            listen 80;
            server_name subdomain.example.com;
            access_log /srv/www/subdomain.example.com/logs/access.log;
            error_log /srv/www/subdomain.example.com/logs/error.log;
            root /srv/www/subdomain.example.com/public_html;
            location / {
                index index.html index.htm index.php;
            }
            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }
    I've created the symlink to the file in the /etc/nginx/sites-enabled/ directory and restarted/reloaded nginx. DNS seems fine:
        # ping -c 2 subdomain
        PING subdomain.example.com (xxx.xxx.xx.xxx) 56(84) bytes of data.
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=1 ttl=64 time=0.035 ms
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=2 ttl=64 time=0.048 ms
    Checking the site with cURL works:
        # curl http://subdomain.example.com
        HTML - OK
    I've emptied the browser cache, but still no dice. Anything I'm missing? Like I mentioned, I have a few sites running fine on the server currently, so php-fpm etc. are working. Any help would be much appreciated! Cheers, Dave
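    Both the ping and the curl above run on the server itself, so they say little about the path Chrome takes. A hedged check from the machine running Chrome (the masked IP stands in for the real address), to separate a public-DNS problem from a connectivity or server_name problem:
        # does the name resolve publicly, and does the vhost answer when the IP is forced?
        nslookup subdomain.example.com
        curl -v --resolve subdomain.example.com:80:xxx.xxx.xx.xxx http://subdomain.example.com/
    If the first command fails while the second returns the page, the missing piece is public DNS; the wildcard entry in the server's own /etc/hosts does nothing for remote clients.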

  • Windows 2008 Server can't connect to FTP

    - by stivlo
    I have Windows 2008 Server R2 and I am trying to install the FTP services. My problem is that I can't connect from outside; FileZilla complains with:
        Error: Connection timed out
        Error: Could not connect to server
    Here is what I did. With Server Manager, I installed the roles FTP Server, FTP Service and FTP Extensibility. In Internet Information Services 7.5, I chose Add FTP Site, enabled Basic Authentication, and allowed a user to connect with Read and Write permissions. In FTP Firewall Support at the server level (the node just after the start page), I set the Data Channel Port Range to 49100-49250 and set the External IP Address to the one I see from outside. If I click on FTP IPv4 Address and Domain Restrictions and then Edit Feature Settings, I see that access for unspecified clients is set to Allow, so I click OK without changing those defaults. In FTP SSL Policy, I set Require SSL connections; the certificate is self-signed. I tried to connect with FileZilla from the same host and it works; however, it doesn't work remotely, as I said above. I've enabled pfirewall.log, but apparently nothing gets logged. The server is in Amazon EC2, and in the security group's inbound firewall rules I've allowed connections to port 21 and ports 49100-49250 from everywhere. What else should I be checking to solve the problem?
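    Two hedged things worth checking with this exact combination (FTPS plus a passive port range behind the Windows firewall): the data channel range only takes effect after the Microsoft FTP Service is restarted, and Windows' stateful FTP inspection cannot follow a TLS-encrypted control channel, so it is normally disabled for FTPS:
        rem restart the FTP service so the new data channel port range is picked up
        net stop ftpsvc && net start ftpsvc
        rem disable stateful FTP filtering, which cannot inspect TLS-protected FTP sessions
        netsh advfirewall set global StatefulFtp disable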
