Search Results

Search found 34397 results on 1376 pages for 'php socket'.


  • Confused with creating an ODBC connection, apparently I have two separate odbcad32.exe files?

    - by Hoser
    Alright, this is my first time working with this, so forgive me if I'm a little confusing or vague. I have a server with Windows Server 2008 Standard without Hyper-V (6.0, build 6002). I'm running a small website off this server and using a Microsoft Access database to store some information coming in through the website. I'm sure the PHP I have written to open the ODBC connection is correct, as it worked when I built this website in a testing environment on a laptop.

    My current issue is that I seem to have two different odbcad32.exe files: one has a driver only for .mdb files, not .accdb, while the other has a driver for both. The first has a driver titled 'Driver do Microsoft Access (.mdb)'; the second has one titled 'Microsoft Access Driver (.mdb, .accdb)'. I reach the first odbcad32.exe at C:\Windows\SysWOW64\odbcad32.exe; for the one that seems to have the driver I need, I go to Control Panel > Administrative Tools > Data Sources (ODBC) and create a new connection in the System DSN tab.

    Whenever I make changes through the Control Panel version, I see no effect; if I use the odbcad32.exe in SysWOW64, the errors that come back do change. The main difference I noticed: with the Control Panel DSN, PHP simply couldn't find the ODBC connection, but when I made a .mdb connection in the SysWOW64 one (and pointed it at a .accdb file) it said "Cannot open database '(unknown)'. It may not be a database that your application recognizes, or the file may be corrupt." So it seems the odbcad32.exe in SysWOW64 is the one actually being used. Is there any way to fix this? I've tried to be as thorough as possible, but if I've been confusing or left anything out, let me know.
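
    For context, 64-bit Windows really does ship two ODBC administrators: Control Panel > Administrative Tools > Data Sources (ODBC) opens the 64-bit one from System32, while C:\Windows\SysWOW64\odbcad32.exe manages the DSNs that 32-bit applications (such as a 32-bit PHP) actually see. As a reference point, here is a minimal PHP sketch of such a connection; the DSN name 'WebsiteDB' and the file path are placeholders, and the DSN-less fallback assumes the 'Microsoft Access Driver (*.mdb, *.accdb)' driver is installed for the same bitness as the PHP process:

        <?php
        // Minimal sketch: open an Access database over ODBC.
        // 'WebsiteDB' is a hypothetical System DSN.
        $conn = @odbc_connect('WebsiteDB', '', '');
        if ($conn === false) {
            // Fall back to a DSN-less connection string (path is a placeholder).
            $conn = @odbc_connect(
                'Driver={Microsoft Access Driver (*.mdb, *.accdb)};Dbq=C:\\data\\site.accdb;',
                '', ''
            );
        }
        if ($conn === false) {
            die('ODBC connect failed; check the PHP warning for the driver error.');
        }
        echo 'Connected.';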

  • solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test-drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server. My problem is that we have a very poor internet connection (almost negligible) at work, and the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always fetch their packages from online repositories somewhere. I know I could probably download the source and compile it myself, but that would be too much of a hassle. I'm thinking about two solutions:

    Plan A: Run a server VM on my Mac and then use this VM as the package source for the offline server. I've read about Ubuntu's apt-proxy and it seems good enough, though I haven't tried it yet. I'm not sure if this is possible, but can I simply do apt-get install --download-only nginx so that the package and its dependencies are downloaded into my VM, and my server can use the VM as its apt repository?

    Plan B: Run a server VM on my Mac (which I can set up and update easily when I'm home) and then clone the VM to the offline development server. Maybe I should simply make the server a VM host so I can just copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image).

    If I were working on Windows, I imagine it would be easier, because most services have an installer file that I can download and then run on the server. If you can suggest another way, it would be much appreciated.

    Update: From Michael Hampton's answer, I found a possible solution: apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.

  • My server can't resolve domains?

    - by Nuker
    I am on a VPS that is pretty much unmanaged, so I'm on my own. I did my best to configure it so I can host my own site, but it seems I have network problems: in the last few days many of my users report they can't reach my site through my domain, and it seems Google and Facebook can't either (this never happened before). It's weird, because I can reach the site without problems, and so can many other people. But then I tried a PHP include and got this error:

        Warning: include(): php_network_getaddresses: getaddrinfo failed: Name or service not known in

    I was told this looks like my server can't resolve domains. The includes work if I use IPs instead of domain names. So it means I have a DNS problem or something? What can I do to fix it? I'm on Linux 2.6.32-431.11.2.el6.x86_64, CentOS Linux 6.5 on x86_64. Thank you.

    EDIT: I have this in my resolv.conf:

        # Generated by NetworkManager
        # No nameservers found; try putting DNS servers into your
        # ifcfg files in /etc/sysconfig/network-scripts like so:
        #
        # DNS1=xxx.xxx.xxx.xxx
        # DNS2=xxx.xxx.xxx.xxx
        # DOMAIN=lab.foo.com bar.foo.com
        nameserver 8.8.8.8
        nameserver 8.8.4.4
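
    A quick way to confirm whether PHP itself can resolve names on the box is a diagnostic sketch like the following; example.com stands in for whatever host the includes use:

        <?php
        // Diagnostic sketch: can PHP resolve names on this box?
        $host = 'example.com';
        echo $host . ' -> ' . gethostbyname($host) . "\n"; // echoes the name back unchanged on failure
        var_dump(checkdnsrr($host, 'A'));                  // direct DNS query for an A record
        var_dump(dns_get_record($host, DNS_A));            // full resolver answer, or false on failure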

  • IIS no longer saving session variables

    - by John
    I'm running IIS v7 on a Windows 7 development machine. I have PHP code that saves session variables and reads them back later. This has been working on this machine for some time. For some reason, the session variables now disappear immediately after being saved. Code that used to work fine on http://localhost/ suddenly does not. I have tested different browsers; the variables disappear regardless of browser. I have tested identical code on different servers; the problem exists only on this development machine.

    I tried some code that saves a session variable, reads it back and displays it, then shows a link to click to read it back and display it again. The session variable DOES get written, read back, and displayed fine, but when you click the link to view it again, it's gone.

    I don't recall making any changes to IIS, but I did run several malware scanners and clean-up tools. Is anyone aware of any setting in IIS that disallows session variables? Any other thoughts?
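
    To isolate things, a minimal round-trip test (a sketch, not the original code): if the counter below resets on every request, a usual Windows/IIS culprit is session.save_path pointing at a directory the IIS worker can no longer write to (PHP falls back to the system temp directory, e.g. C:\Windows\Temp, when it is empty), which an aggressive clean-up tool could plausibly have caused:

        <?php
        // Minimal session round-trip test.
        session_start();
        $_SESSION['hits'] = isset($_SESSION['hits']) ? $_SESSION['hits'] + 1 : 1;
        echo 'save_path: '  . session_save_path() . '<br>';
        echo 'session id: ' . session_id() . '<br>';
        echo 'hits: '       . $_SESSION['hits'] . '<br>';
        echo '<a href="' . htmlspecialchars($_SERVER['PHP_SELF']) . '">reload</a>';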

  • Windows 7: moved system partition, need to update boot partition

    - by Actorclavilis
    So, I have a fairly standard Windows 7/Ubuntu dual-boot setup, and (since Ubuntu is my usual operating system) I found I needed to grow my Ubuntu partition and shrink my W7 partition. Originally, my 500G system looked like this:

        W7 boot partition (1.5G)
        Ubuntu (around 240G)
        W7 (same size as Ubuntu, on an extended partition, all by itself)
        Swap (rest of disk, around 16G)

    Now I'm no stranger to partitioning and filesystem tools, especially GParted, which I used from a Linux boot disk. After my partition editing, the partitions are laid out the same, except the Ubuntu partition is now 407G and the W7 partition is smaller to compensate. I had supposed, based on http://www.gparted.org/faq.php, that I would be able to run the W7 install disk in recovery mode and have it deal with the rearrangement, then possibly reinstall GRUB or something.

    Well, now the W7 install disk doesn't even see my W7 installation. All my files are there and the NTFS is perfectly clean, no problems there, but the install disk won't notice it. (Of course, the GRUB entry works fine, but the W7 boot partition, which I didn't change, refuses to boot it.) So, basically, any ideas on how to fix this? I don't especially want to rerun the entire install procedure, because I'd have a bunch of programs to reinstall (never mind redoing GRUB), but I fear that might be the only option. Thanks.

  • Preventing back connects on cPanel servers

    - by Fernando
    We run a cPanel server, and someone gained access to almost all accounts using the following steps:

    1) He gained access to one user account due to a weak password. Note: this user didn't have shell access.

    2) With this account, he logged into cPanel and added a cron task. The cron task was a Perl script that connected back to his IP and let him send shell commands to the server.

    3) With a non-jailed shell, he was able to change the content of most websites on the server, especially for users who set their folders to 777 (unfortunately a common recommendation, and sometimes a requirement, for some PHP software).

    Is there a way to prevent this? We started by disabling cron in the cPanel interface, but this is not enough; I see a lot of other ways a user could run that Perl script. We have a firewall running and blocking uncommon outgoing ports, but he used port 80, and I can't block that port, since a lot of processes use it to reach the outside, even cPanel itself.

  • Why is /dev/urandom only readable by root since Ubuntu 12.04 and how can I "fix" it?

    - by Joe Hopfgartner
    I used to work with Ubuntu 10.04 templates on a lot of servers. Since changing to 12.04 I have problems that I've now isolated: the /dev/urandom device is only accessible to root. This caused SSL engines, at least in PHP, to fail (for example, file_get_contents('https://...')), and it also broke Redmine. After a chmod 644 it works fine, but that doesn't survive a reboot.

    So my question: why is this? I see no security risk; I mean, wanna steal some random data? And how can I "fix" it? The servers are isolated and used by only one application; that's why I use OpenVZ. I'm thinking about something like a runlevel script, but how do I do it efficiently? Maybe with dpkg or apt? The same goes for /dev/shm; in that case I totally understand why it's not accessible, but I assume I can "fix" it the same way as /dev/urandom.
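
    For the "fix" part, a chmod 644 /dev/urandom line in /etc/rc.local (or an equivalent udev rule) is the usual way to make the mode survive a reboot; whether that suits this template is an assumption, not something tested here. To verify what the application user actually sees, a small PHP diagnostic sketch (the URL is a placeholder):

        <?php
        // Diagnostic sketch: run as the web/application user, not root.
        var_dump(is_readable('/dev/urandom'));
        $bytes = @file_get_contents('/dev/urandom', false, null, 0, 16);
        var_dump($bytes === false ? false : bin2hex($bytes));
        // The original failure mode: any TLS request needs entropy.
        var_dump(@file_get_contents('https://example.com/') !== false);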

  • Why is "start in" needed for Windows scheduled tasks?

    - by GomoX
    We develop a web application that can be deployed on Windows or Linux. The Linux implementation uses cron, and the Windows one uses a scheduled task, to run a single PHP script that processes all scheduled jobs for our system. The task is created with schtasks during the install process. This has always worked under both W2003 and W2008.

    A week ago a customer reported that scheduled tasks were not running. He is running Windows 2008. We checked over and over, and finally solved the issue by entering the folder that contains the .vbs script as the "start in" folder for the scheduled task. That said, there is no way to set the "start in..." value from schtasks without using an XML task definition, and XML definitions don't work on Windows 2003, so I would have to add Windows version detection to the installer, additional testing, etc. (I'd like to avoid this if at all possible).

    The only atypical thing I noticed about this install is that the system is installed in D:\ as opposed to the default C:\Program Files (x86)\, but I don't see how that would matter; all the paths in all the scripts are absolute. Can anyone suggest a reasonable solution for this?
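
    Since the job ultimately runs a PHP script, one workaround sketch (assuming it is relative paths that break without "start in") is to pin the working directory at the top of the script itself, which removes the dependency on the task's "start in" value entirely:

        <?php
        // Sketch: make the scheduled script independent of the task's
        // working directory by resolving everything relative to its own location.
        chdir(dirname(__FILE__)); // __DIR__ on PHP 5.3+
        require 'config.php';     // hypothetical relative include, now safe
        // ... process the scheduled tasks ...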

  • nginx root directory not forwarding correctly

    - by user66700
    The server files are stored in /var/www/. Everything was working perfectly; then I started getting the following errors:

        2011/01/28 17:20:05 [error] 15415#0: *1117703 "/var/www/https:/secure.domain.com/index.html" is not found (2: No such file or directory), client: 119.110.28.211, server: secure.domain.com, request: "HEAD /https://secure.domain.com/ HTTP/1.1", host: "secure.domain.com"

    Here's my config:

        server {
            server_name secure.domain.com;
            listen 443;
            listen [::]:443 default ipv6only=on;
            gzip on;
            gzip_comp_level 1;
            gzip_types text/plain text/html text/css application/x-javascript text/xml text/javascript;
            error_log logs/ssl.error.log;
            gzip_static on;
            gzip_http_version 1.1;
            gzip_proxied any;
            gzip_disable "msie6";
            gzip_vary on;
            ssl on;
            ssl_ciphers RC4:ALL:-LOW:-EXPORT:!ADH:!MD5;
            keepalive_timeout 0;
            ssl_certificate /root/server.pem;
            ssl_certificate_key /root/ssl.key;
            location / {
                root /var/www;
                index index.html index.htm index.php;
            }
        }

  • vi and emacs: comparison? (not flamebait!)

    - by jared
    So, I've been enjoying learning and using vi for the last couple of years. The beauty of vi, for me, is that its UI is a language of movement and action with a very uniform, simple grammar, one terse enough that the requisite memorization pays ample dividends in how much more I enjoy working with text (by avoiding boring repetition and eliminating micro-hassles, like that annoying half-second wait while you scroll down the screen).

    (Note: I don't claim to have expert knowledge of vi, but I get around decently well: comfortable with limited '@' macros and regexp search-and-replace within files; frequently use multiple buffers, tabs, and windows; get around pretty well in the file browser; understand the grammar of actions + movement + subject (as described so aptly in this beautiful SO answer); and had some pretty sweet debugger and ctags integration going with PHP.)

    I wonder if some emacs folks could take a swing at explaining what emacs does brilliantly, or sum up its strengths in a phrase or two. Spare me the talk about productivity; I'm more interested in conceptual clarity. Lisp-centric answers are okay; I'm learning Scheme on the weekends and would pick up emacs for that alone (I have been using Racket).

  • How to fix "The connection was restarted"?

    - by Altar
    I have a PHP script with AJAX. A few days ago, without any changes being made, the script stopped working properly. The script sometimes works but is very slow; other times it doesn't load completely; yet other times it loads an empty page or shows a "The connection was restarted" message. We have tried loading the page from 4 different computers in two different countries (two different ISPs). There is nothing relevant in the server's log.

    We contacted iPage.com hosting support and sent screenshots, and all we managed to get from them was an incomplete screenshot and the message "We tested. The page loads." So they won't admit the fault. It looks like they loaded the page once, declared it working, and went back to playing solitaire. I know their support; we have had all kinds of problems with iPage hosting. My questions are:

    1. Is there any way to make this error easier to reproduce? I mean, instead of fixing the error, getting it to appear more often.

    2. Is there any way to test a server's responses, to see if it drops connections? (See the sketch below.)
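
    For question 2, a rough PHP sketch that requests the page repeatedly and logs how each attempt ends, to make intermittent drops measurable; the URL is a placeholder, and this assumes you can run it from a machine outside the host:

        <?php
        // Hammer the page and record the outcome of every attempt.
        $url = 'http://www.example.com/page.php';
        for ($i = 1; $i <= 50; $i++) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_TIMEOUT, 30);
            $body = curl_exec($ch);
            $err  = curl_error($ch);
            $info = curl_getinfo($ch);
            printf("#%02d http=%d time=%.2fs bytes=%d err=%s\n",
                $i, $info['http_code'], $info['total_time'],
                strlen((string) $body), $err !== '' ? $err : 'none');
            curl_close($ch);
        }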

  • Cloned CentOS 6.4 web server for test purposes: virtual host, .htaccess, URL redirection issue

    - by Shogoot
    I see similar questions, but not my exact challenge. Here's what I have done so far: I cloned a production server over to VMware to use it as a test server for new functionality I'm going to write. I'm not a sysadmin by trade, but I'm new to this company and I have to do some things that are outside my comfort zone (that's a good thing :) ).

    The production server has two sites on it, s1.com and s2.com. In /html/s1/ and /html/s2/ there's an .htaccess file under each s*/, looking like this:

        RewriteEngine ON
        RewriteBase /
        RewriteCond %{QUERY_STRING} id=([0-9]+)
        RewriteRule ^.* %1.htm
        RewriteCond %{QUERY_STRING} page=modules/checkout
        RewriteRule ^.* order.php
        RewriteCond %{QUERY_STRING} page=pages/sidekart
        RewriteRule ^.* pages/sidekart.htm

    The issue is that s1 has a lot of pages that really belong under a third domain, s3; the rules in lines 4 and 5 redirect them to /html/s1/. An example of such a URL is:

        s3.com/?page=modules/product&id=521614

    I'm trying to get those URLs (without modifying them) to map to s3's /html/s3/ directory structure. I set that up by adding a new virtual host for s3 in the test server's httpd.conf, with tests3.com as the ServerName, renaming the other sites to tests1.com and tests2.com, adding an .htaccess to this s3 root directory as well, and creating an /html/s3/ structure populated with an index.html, etc. But when I take the same URL (s3.com/?page=modules/product&id=521614) and change it to tests3.com/?page=modules/product&id=521614, I get s1's index page in my browser. I've poked around for about a day now and I can't figure out why this happens.

  • How to make lighttpd respect X-Forwarded-Proto when constructing redirects for directories?

    - by Tim Landscheidt
    We have an nginx proxy at tools.wmflabs.org that receives requests over http and https and passes them on over http to lighttpds on a grid (one lighttpd per top-level path). Requests that reach the proxy via https are received by the lighttpds like this:

        HEAD /lighttpd-test/test HTTP/1.1
        Connection: close
        Host: tools.wmflabs.org
        X-Forwarded-Proto: https
        X-Original-URI: /lighttpd-test/test
        User-Agent: curl/7.29.0
        Accept: */*

    This works great, except in the case where the URL references a physical directory and misses the trailing slash ("/"), as lighttpd then generates a redirect to the http URL:

        HTTP/1.1 301 Moved Permanently
        Location: http://tools.wmflabs.org/lighttpd-test/test/
        Connection: close
        Date: Fri, 06 Jun 2014 14:50:29 GMT
        Server: lighttpd/1.4.28

    The relevant parts of our lighttpd configurations are:

        server.modules = ( "mod_setenv", "mod_access", "mod_accesslog",
                           "mod_alias", "mod_compress", "mod_redirect",
                           "mod_rewrite", "mod_fastcgi", "mod_cgi", )
        server.port = $port
        [...]
        server.document-root = "$home/public_html"
        [...]
        server.follow-symlink = "enable"
        [...]
        server.stat-cache-engine = "fam"
        ssl.engine = "disable"
        alias.url = ( "/$tool" => "$home/public_html/" )
        index-file.names = ( "index.php", "index.html", "index.htm" )
        dir-listing.encoding = "utf-8"
        server.dir-listing = "disable"
        url.access-deny = ( "~", ".inc" )
        [...]

    How can I make lighttpd respect X-Forwarded-Proto and use it when constructing redirects for directories? I'm aware that I could try to tackle this in nginx, but I'd prefer to fix it in lighttpd.

  • How to keep multiple servers in sync file-wise?

    - by GForceSys
    I'm currently managing a cluster of PHP-FPM servers, all of which tend to drift out of sync with each other. The application running on top of the app servers (Magento) allows admins to modify various files on the system, but now that the site is clustered, modifying a file only changes it on a single instance (one of the app servers) in the cluster.

    Is there an open-source application for Linux that would allow me to keep all of these servers in sync? I have no problem with creating a small VM instance that can listen for changes from the machines to be synced. In theory, the perfect application would have small clients running on each synced machine that talk to a master server, which then decides how and what to sync from each machine.

    I have already examined the possibility of running a centralized file server, but unfortunately my app servers are spread out between EC2 and physical machines, which makes this unfeasible. And since there are multiple app servers (some of which are created dynamically depending on site load), simply setting up an rsync cron job is not efficient: the job would have to be modified on each machine to send files to every other machine in the cluster, which would just be a whole bunch of unnecessary data transfers and SSH connections.

  • Unwanted blank lines when checking out from SVN

    - by Alon_A
    I'm using CentOS Linux 5.8 as a web server and TortoiseSVN for synchronizing versions of our code. We write the code in Windows 7 Professional 64-bit with NetBeans and Notepad++. I'm checking the code files (.php) out on the Linux shell with this command:

        svn co svnFolder serverFolder --username **** --password ****

    The problem is that after checking out the files, when I open them directly from the server (for debugging) with Notepad++ (via View/Edit in FileZilla), I see extra blank lines. Code that looks like this on localhost (in Notepad++):

        private $producer;
        private $account;
        private $admin;
        private $producerEvents;
        private $accountProducers;
        private $adminAccounts;

    will look like this on the server (again, in Notepad++):

        private $producer;

        private $account;

        private $admin;

        private $producerEvents;

        private $accountProducers;

        private $adminAccounts;

    If I upload the files by FTP, no blank lines are added. How can I solve it? Thanks.

  • What method of MySQL mirroring should I use for this?

    - by user45745
    I'm running a web application hosting service (basically hosting forums for free), and I have two remote servers at my disposal. The code for the application is stored on both servers and isn't a problem, but I'm wondering how to deal with the databases. When someone visits a site *.example-host.com, they are sent to one of the two servers, and both must be capable of loading the forums from a database. The database must also be writable, for when new members register or post topics, etc. The main requirement is speed, but uptime is also important (if a server goes out, the site should still work). I have a few options, but I'm inexperienced and not sure which to go with:

    1) [PHP] Split the forum records 50:50 between the two servers. If a server does not have the record for a requested forum, it can fetch it from the other server over remote MySQL and load it. This idea sounded okay until I realised that 50% of the time users would be waiting significantly longer for pages to load, and that if one of the servers went down, half the forums would be inaccessible and registrations would have to be disabled.

    2) [MySQL] Dual-master replication. This would mirror the two databases and sounds perfect, but I've heard it can be very problematic. I don't know how fast it is.

    3) [MySQL] Use standard replication, distribute read-only queries across both nodes, and send read/write queries to the master (see the sketch below). This sounds like a good option, but again, I'm not sure about speed, and I don't know what would happen if the master server went down.

    If you have any other suggestions, please post them :)
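
    As a rough illustration of option 3, a PHP sketch that routes writes to the master and spreads plain SELECTs across both nodes; hostnames and credentials are placeholders, and real code would need failover and error handling:

        <?php
        // Sketch: read/write splitting in the application layer.
        $master = 'db1.example-host.com';
        $nodes  = array('db1.example-host.com', 'db2.example-host.com');

        function connect_for($sql, $master, $nodes) {
            // Writes (and anything transactional) must hit the master;
            // plain SELECTs can go to a randomly chosen node.
            $isRead = preg_match('/^\s*SELECT\b/i', $sql);
            $host   = $isRead ? $nodes[array_rand($nodes)] : $master;
            return new mysqli($host, 'forum_user', 'secret', 'forums');
        }

        $db  = connect_for('SELECT * FROM topics LIMIT 10', $master, $nodes);
        $res = $db->query('SELECT * FROM topics LIMIT 10');

    One caveat with standard replication: a read that immediately follows a write can hit a replica that has not caught up yet, so reads that must be fresh (e.g. right after registration) are often forced to the master.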

  • Windows Server 2003 setup

    - by Barracksbuilder
    I work at a university maintaining the computer science department server, and I am looking for a more economical way to streamline the setup of student accounts. CS students are granted a username and password, an IIS virtual directory, an FTP virtual directory, and a MySQL database. The server is running Windows Server 2003 R2 (possibly migrating to 2008 R2). The server runs a domain, though no students physically log into it at a terminal (no computers are part of my domain).

    Creating an account is a manual process. I did write a PHP script that queries the university's AD, copies the information, and writes it to my AD. I then have to create the user's home directory, basically by hand; I tried having AD do it, but since the user never physically logs in, the directory never gets created. Permissions on this folder are set to: User - full, Instructors (group) - full, Users (group) - read, IUSR - read. Inside the user's folder there is a "Private" folder with permissions: User - full, Instructors (group) - full.

    Next is IIS: I create a virtual directory in the default web site pointed at the user's home directory, so they have a website. The same goes for an FTP virtual directory in the default FTP configuration, to let users upload files to their website.

    For MySQL, I have to create a user and password, then create a MySQL schema (database) with full access for the user and full access for the instructors' account, so instructors can reach the students' databases.

    All of this is done manually and takes me a week. The closest description of the result is maybe a shared hosting environment. Is there a better way to do this, scripting-wise, or a better structure for the setup?
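
    At minimum the MySQL step could be scripted, e.g. from the same PHP that copies the AD entries. A sketch with placeholder names; the connecting account needs CREATE and GRANT privileges, and the 'instructors' MySQL account is assumed to already exist:

        <?php
        // Sketch: provision a student's MySQL database and grants.
        $admin = new mysqli('localhost', 'provision', 'secret');
        $user  = 'jsmith';              // student username copied from AD
        $pass  = 'generated-password';

        $admin->query(sprintf('CREATE DATABASE `%s`', $user));
        $admin->query(sprintf("CREATE USER '%s'@'%%' IDENTIFIED BY '%s'",
            $user, $admin->real_escape_string($pass)));
        $admin->query(sprintf("GRANT ALL ON `%s`.* TO '%s'@'%%'", $user, $user));
        $admin->query(sprintf("GRANT ALL ON `%s`.* TO 'instructors'@'%%'", $user));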

  • Set document root for external subdomain (A Record) via htaccess

    - by 1nsane
    I have a managed server (unable to control Apache settings) with the default document root of /var/www. I have a web app running in /var/www/subdomains/app/webroot, and a dedicated domain, managed by the host, with that webroot, which works perfectly.

    I would like to allow externally provisioned domains to point to the server/web app via an A record. If I access the site by IP, it takes me to the index located in /var/www. I would like to configure the .htaccess in my /var/www directory to rewrite requests from the external subdomain to the /var/www/subdomains/app/webroot directory. I've done so with the following rules:

        RewriteCond %{HTTP_HOST} external\.domain\.com$ [NC]
        RewriteRule ^(.*)$ /var/www/subdomains/app/webroot/index.php?url=$1 [L,QSA]

    When accessing external.domain.com, the app loads properly, but the paths to things like CSS files, images, etc. are prefixed with "/subdomains/app/", causing broken links. I've tried changing the RewriteBase (both in /var/www and /var/www/subdomains/app/webroot), as I believe that's what it's designed for, but no luck. Any ideas? FYI, the app is built on CakePHP. Thanks

  • Strategy to isolate multiple nginx SSL apps with a single domain via sub-URIs?

    - by icpu
    Warning: so far I have only learnt how to use nginx to serve apps with their own domain and server block, but I think it's time to dive a little deeper. To mitigate the need for multiple SSL certificates or expensive wildcard certificates, I would like to serve multiple apps (e.g. Rails apps, PHP apps, Node.js apps) from one nginx server_name, e.g.:

        rooturl/railsapp
        rooturl/nodejsapp
        rooturl/phpshop
        rooturl/phpblog

    I am unsure of the ideal strategy. Some examples I have seen and/or thought about:

    1. Multiple location rules. This seems to cause conflicts between the individual apps' config requirements, e.g. differing rewrite and access requirements.

    2. Isolating apps by backend internal port. Is this possible? Each port would route to its own config, so config would be isolated and could be bespoke to each app's requirements.

    3. A reverse proxy. I am a little ignorant of how this works; is this what I need to research, and is it actually the same as 2 above? Help online always seems to proxy to another server, e.g. Apache.

    What is an effective way to isolate the config requirements of apps served from a single domain via sub-URIs?

  • Proper Rules For SSL Redirect For Subdomains

    - by Zac Cleaves
    This is what I have been using:

        RewriteCond %{HTTP_HOST} ^(.*\.)*subexample.example.com$ [NC]
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^(.*)$ https://subexample.example.com/$1 [R]

    It works as long as I go to a specific page, like subexample.example.com/orders.php, but if you try to go to the root of the subdomain, it adds an extra "/example" at the end. Any suggestion on a set of rules that will work? Thank you so much for your responses!

    Actually, this is what I am trying to do:

        http://support.mydomain.net >> https://support.mydomain.net

    and

        http://support.mydomain.net/anypage* >> https://support.mydomain.net/anypage*

    The following works, except that I need it to apply only to support.mydomain.net:

        RewriteCond %{HTTPS} off
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

    With that setup, you get a certificate warning if you try to go to just mydomain.net, which I do not have (or need) an SSL certificate installed on.

    UPDATE! The other issue with the rule I have written above is that if you try to go to the root of the subdomain (i.e. support.mydomain.net), it goes to https://support.mydomain.net/support. This is driving me crazy, help! =) Any help would be greatly appreciated!

  • Building a small server farm

    - by RayQuang
    Hi, I am planning to set up a tech startup company that will provide web application solutions. Eventually we hope to diversify into different areas, such as social media or other services. For now we plan on running a high-demand website (from 1,000 to 10,000 users in the first year) running the application. This includes a MySQL database backend, email, and development servers.

    My question is: what type of server arrangement will work best? That is to say, should I have a small cluster of ultra-high-power machines (e.g. top-of-the-range Xeons with 12GB RAM), or would it be better to have more, less powerful servers behind a load balancer? Should I go for 1U or 2U rack-mounted servers, or would tower servers be better for maintainability?

    Finally, I would also like to know what kind of internet connection and router I would need. I currently have 10 Mbit down and barely 1 Mbit up, but soon our area will have a fibre-optic connection with international speeds of up to 25 Mbit/s.

    Thanks in advance, RayQuang

    UPDATE: Sorry, I forgot to mention it: the platform I will be using is PHP with the APC code cache, probably running on Debian.

  • Very high Magento/Apache memory usage even without visitors (are we being fooled by our hosting company?)

    - by MrDobalina
    I am no server guy, and we have issues with our speed, so I come here asking for advice. We have a VPS with 2 cores and 2GB of RAM at a Magento-specialized hosting company. Over the course of the last weeks our site speed has gotten worse, even though our store is new, has fewer than 1000 SKUs, and gets not even 100 visitors a day. At magespeedtest.com we only get 1.87 trans/sec @ 2.11 secs each with a mere 5 concurrent users. Our Magento log files are clean, and we have no huge database tables or anything like that.

    When we look at our server's real-time stats, we see that memory usage jumped from about 34% to 71%, and now 82%, in just a few days, at idle, with no visitors on the site. Our hosting company said we do not need to worry about that, as it is maybe related to MySQL, which creates buffers (which are maybe not even actually being used), and that what is important is CPU and swap; those stats are okay.

    They also said the low benchmark scores are caused by bad extensions or template modifications on our side. We are not sure we can trust that statement, as we only have 4 plugins installed (all from Aheadworks and Amasty, which are known to be among the best Magento extension developers). Our template modifications are purely HTML and CSS, with no modifications to the PHP code. Our page speed is ranked 93/100 in Firebug, and Magento is properly configured, so the problem really only becomes obvious when there are a handful of users on the site at the same time.

    Can anyone confirm our hosting company's statement about memory usage, and where can I start looking for a solution?

  • SSL and regular VHost on the same server [duplicate]

    - by Pascal Boutin
    This question already has an answer here: "How to stop HTTPS requests for non-ssl-enabled virtual hosts from going to the first ssl-enabled virtualhost (Apache-SNI)" (1 answer)

    I have a server running Apache 2.4, on which several virtual hosts run. The problem I noticed is that if I try to access, let's say, https://example.com, which has no SSL set up, Apache will automatically serve the first VHost that has SSL activated (which is literally a different site). How can we prevent this strange behaviour? In other words, how do I tell Apache to ignore SSL for a given site? Here's a sample of what my .conf files look like:

        <VirtualHost foobar.com:80>
            DocumentRoot /somepath/foobar.com
            <Directory /somepath/foobar.com>
                Options -Indexes
                Require all granted
                DirectoryIndex index.php
                AllowOverride All
            </Directory>
            ServerName foobar.com
            ServerAlias www.foobar.com
        </VirtualHost>

        <VirtualHost test.example.com:443>
            DocumentRoot /somepath/
            <Directory /somepath/>
                Options -Indexes
                Require all granted
                AllowOverride All
            </Directory>
            ServerName test.example.com
            SSLEngine on
            SSLCertificateFile [...]
            SSLCertificateKeyFile [...]
            SSLCertificateChainFile [...]
        </VirtualHost>

    With this, if I try to access https://foobar.com, Chrome shows me an SSL error mentioning that the server was identifying itself as test.example.org. Thanks in advance!

  • Help - since adding an elastic load balancer to my EC2 web application I cannot connect to the MySQL database (not in AWS)

    - by undefined
    I have a web application that uses an EC2 instance to receive uploaded images, resize them, store them on S3, and update my MySQL database with the image record. This database is hosted outside Amazon Web Services, so it obviously involves communication between the EC2 instance and the database. Images are posted to the upload server from a Flash client, which receives the IP address of the upload server when it loads, and so sends images to 1.12.23.34/resize_script.php.

    This has worked great... until I started trying to add a load balancer. Since ELBs use a DNS name rather than an IP address, I am now passing that DNS name to Flash. Now when I upload images I get the following response from the server:

        Could not connect to MySQL: Lost connection to MySQL server at 'reading initial communication packet', system error: 111

    What might be causing the lost connection to the MySQL server? Are there any additional steps I need to take to allow my upload servers to be load balanced? I have set the host property of my MySQL privileges for this user to %. Any pointers greatly appreciated, thanks.
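
    Since the error surfaces while 'reading initial communication packet' with system error 111 (ECONNREFUSED), a first check is whether the upload instance can reach the database host on port 3306 at all; a small PHP sketch, with a placeholder hostname:

        <?php
        // Sketch: test raw TCP reachability of the external MySQL host
        // from an instance behind the ELB.
        $host  = 'db.example.com';
        $errno = 0; $errstr = '';
        $sock  = @fsockopen($host, 3306, $errno, $errstr, 10);
        if ($sock === false) {
            echo "cannot reach $host:3306 - [$errno] $errstr\n";
        } else {
            // The first bytes should be MySQL's handshake packet.
            echo "TCP connect OK, handshake bytes: " . bin2hex(fread($sock, 16)) . "\n";
            fclose($sock);
        }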

  • [deb-5.0] Set up DNS on my server so I can use my IPs as nameservers at my domain provider

    - by Maurycy Zarzycki
    Basically, my unmanaged VPS provider doesn't supply nameservers that I can use at my domain provider to route my domain to my server. As I've been told:

        You need to configure the custom DNS server in your VPS, to setup the custom nameservers. Please refer the following article that would help: http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch18_:_Configuring_DNS Once you configure the nameserver records, please update the domain registrar panel with the custom nameserver details.

    I tried to follow this guide, but it seems a bit outdated, and I am a complete newb with non-Windows systems. I also scanned Google for other articles which could help me with this problem, but, alas, nothing I found was of any value to someone who doesn't know this stuff better than his own pockets. I realize this is quite a complex thing to do, but maybe there is some way to automate it? Or a better solution, like a paid service which would act as my nameservers (this one would be interesting), or even some company which "rents out" people to do stuff like that.

    Blah, any help will be appreciated; I am at a complete loss here. I can follow some of the steps, but then I soon find that half of the files mentioned in the article somehow don't exist anywhere on the server, which confuses me, and once we get to the point of creating a zone I can't really decipher all the things written there :/ As per the title, my system is Debian 5.0.
