Search Results

Search found 109878 results on 4396 pages for 'server side objects to client side'.


  • First time setting up a MySQL database.

    - by Wilduck
    In trying to learn how to work with the LAMP stack, I've hit a wall with MySQL. I can't seem to find a good reference for the first-time setup of MySQL to be used with Apache and Python. So, my question is four-fold:
    1) Under what circumstances should I create my first database? That is, which user do I use (Apache's http user? root?)
    2) How do permissions work?
    3) Do I have to do anything on the MySQL side to make MySQL talk to Apache, or to make MySQL talk to Python/Django?
    4) Is there a good resource online that describes setting all of this up? I've found a bunch for using a database once it's in place, but none for the initial setup.
    Notes: I'm trying to run my LAMP stack on a dedicated little box for testing/learning purposes only, so I don't have access to any DBA who could help me, as much as I'd like one.
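
    A minimal first-run sketch, assuming a package-based MySQL install; the database name, user name and grant list below are hypothetical and sized for a Django app. Apache itself never needs a MySQL account, since only your Python/Django code opens database connections:

        # optional hardening pass on a fresh install (sets the MySQL root password, etc.)
        sudo mysql_secure_installation

        # create an application database and a dedicated least-privilege user, as MySQL root
        mysql -u root -p -e "CREATE DATABASE myapp CHARACTER SET utf8;
          CREATE USER 'myapp_user'@'localhost' IDENTIFIED BY 'choose-a-password';
          GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX, DROP ON myapp.* TO 'myapp_user'@'localhost';
          FLUSH PRIVILEGES;"

    Django then connects with those credentials through its DATABASES setting; no MySQL-side change is needed for Apache.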

    Read the article

  • Exchange 2013 - DNS Records for Accepting Multiple Domains

    - by William
    I have an Exchange 2013 server accepting two domains: domain1.com and domain2.com. All of the Exchange services (OWA, ECP, POP3, SMTP, etc.) can be found via the address mail.domain1.com. So, in the DNS records for domain1, I have the following entries:
        MX record:    mail.domain1.com
        A record:     mail.domain1.com - (IP address of the server)
        CNAME record: autodiscover.domain1.com - mail.domain1.com
    Now, for domain2.com, how would I set up the DNS records? Would I just have autodiscover be a CNAME for autodiscover.domain1.com? Would this allow me to leverage the certificates that I have installed for domain1?
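
    For what it's worth, a few hedged lookups to verify whichever records you publish for domain2.com, assuming it mirrors the domain1 layout (CNAME for autodiscover, MX pointing at the shared mail host):

        # MX for the second domain should resolve to the shared mail host
        dig +short MX domain2.com

        # autodiscover for domain2 chained via CNAME to the domain1 name
        dig +short CNAME autodiscover.domain2.com
        dig +short A mail.domain1.com

    Note that a CNAME only affects name resolution; whether the existing certificate is accepted depends on autodiscover.domain2.com (the name the client asked for) being listed in the certificate's subject alternative names.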

    Read the article

  • What possible events could cause a MySQL database to revert to a previous state?

    - by justkevin
    A client of mine recently had a strange event with their MySQL database. Several days ago, one database suddenly "went back in time". All the data was in the state it was in several months ago. Even most of the .MYD and .MYI files had timestamps from November. Fortunately, the server is not in production yet, but we need to understand how it happened so it doesn't happen again. I'm not a MySQL guru, but I couldn't think of a scenario that could cause the database to rewind like that short of restoring from a backup. What could have happened here? Where should I look for clues? (Server is FreeBSD 6.4)
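
    A few hedged places to look for clues; paths assume the stock FreeBSD MySQL layout and "yourdatabase" is a placeholder:

        # were the .MYD/.MYI files replaced wholesale? -T prints full timestamps on FreeBSD
        ls -lT /var/db/mysql/yourdatabase/

        # the error log (hostname.err in the data directory) records restarts, crash recovery and restores
        tail -n 200 /var/db/mysql/*.err

        # anything scheduled that reloads dumps or rolls back snapshots?
        crontab -l; cat /etc/crontab

    A restore from an old dump, or a rolled-back filesystem/VM snapshot, are the usual suspects when even the file timestamps travel backwards.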

    Read the article

  • Forwarding ports with ssh on Linux

    - by Patrick Klingemann
    I have a database server, let's call it: dbserver. I have a web server with access to my dbserver, let's call it: webserver. I have a development machine that I'd like to use to access a database on dbserver, let's call it: dev.
    dbserver has a firewall rule set to allow TCP requests from webserver to dbserver:1433. I'd like to set up a tunnel from dev:1433 to dbserver:1433, so all requests to 1433 on dev are passed along to dbserver:1433.
    My sshd_config on webserver has the following options set:
        AllowTcpForwarding yes
        GatewayPorts yes
    This is what I've tried:
        ssh -v -L localhost:1433:dbserver:1433 webserver
    In another terminal:
        telnet localhost 1433
    Results in:
        Trying ::1...
        Connected to localhost.
        Escape character is '^]'.
        Connection closed by foreign host.
    Any idea what I'm doing wrong here? Thanks in advance!
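
    For comparison, a hedged sketch of the same forward run from dev, binding explicitly to the IPv4 loopback (the telnet output above shows it connected to ::1, which may not be where the forwarded listener is):

        # run on dev: local port 1433 -> ssh to webserver -> dbserver:1433
        ssh -N -v -L 127.0.0.1:1433:dbserver:1433 user@webserver

        # test against the IPv4 loopback explicitly
        telnet 127.0.0.1 1433

    If the verbose ssh output prints an "open failed: connect failed" line at the moment telnet connects, the tunnel itself is fine and it is the hop from webserver to dbserver:1433 that is being refused.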

    Read the article

  • Setting up Shibboleth to secure part of a website

    - by HorusKol
    I've installed the Shibboleth module for Apache on Ubuntu 10.04, using aptitude to install libapache2-mod-shib2 as per https://groups.google.com/group/shibboleth-users/browse_thread/thread/9fca3b2af04d5ca8?pli=1, and enabled the module (I have checked in /etc/apache2/mods-enabled). I then proceeded to secure a directory on the server by placing a .htaccess file with the following directives:
        AuthType shibboleth
        ShibRequestSetting requireSession 1
        Require valid-user
    Now, I haven't set up an SSL host yet, and I also haven't set up the IdP, but I would expect the server to block access to this directory; instead I'm getting the content without any problems. I have restarted the Apache service and I have no errors in the log files.
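
    A couple of hedged checks that usually explain ".htaccess silently ignored" on a stock Ubuntu Apache (the vhost paths are assumptions):

        # is the Shibboleth module actually loaded into this Apache?
        apache2ctl -M 2>/dev/null | grep -i shib

        # is the shibd daemon running? (needed for sessions, if it is installed as a service)
        sudo service shibd status

        # .htaccess auth directives are ignored unless the directory allows the override
        grep -Rn "AllowOverride" /etc/apache2/sites-enabled/ /etc/apache2/apache2.conf

    If the matching <Directory> block says AllowOverride None (the Ubuntu default for /var/www at the time), the .htaccess file is never consulted and the directory stays open regardless of what it contains.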

    Read the article

  • Cisco ASA 5505 (8.05): asymmetrical group-policy filter on an L2L IPSec tunnel

    - by gravyface
    I'm trying to find a way to set up a bi-directional L2L IPsec tunnel, but with differing group-policy filter ACLs for each side. I have the following filter ACL set up, applied, and working on my tunnel-group:
        access-list ACME_FILTER extended permit tcp host 10.0.0.254 host 192.168.0.20 eq 22
        access-list ACME_FILTER extended permit icmp host 10.0.0.254 host 192.168.0.20
    According to the documentation, VPN filters are bi-directional: you always specify the remote host first (10.0.0.254), followed by the local host and (optionally) the port number. However, I do not want the remote host to be able to access my local host's TCP port 22 (SSH), because there's no requirement to do so; there's only a requirement for my host to access the remote host's SFTP server, not vice versa. But since these filter ACLs are bidirectional, line 1 also permits the remote host to access my host's SSH server. The documentation I'm reading doesn't make it clear whether this is possible; help/clarification much appreciated.

    Read the article

  • Why is scp not overwriting my destination file?

    - by Noli
    I'm trying to back up a file via the command:
        scp /tmp/backup.tar.gz hostname:/home/user/backup.tar.gz
    When I run it, the scp progress bar shows up and it looks like it's transferring the file. However, when I log into the destination server to check the file, the timestamp and file size haven't changed from the older version, so it looks like scp didn't overwrite the old file at all. It only seems to work when I manually delete the file from the destination server first. I'm running Ubuntu, and this is happening on two destination servers: one Cygwin ssh and one Fedora Core 3. Does anyone have any idea why this is happening? I thought scp would simply overwrite existing files. Thanks
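
    A hedged way to confirm whether the bytes on the far end actually changed (paths match the command above):

        # checksum the local file and whatever is sitting at the destination
        sha1sum /tmp/backup.tar.gz
        ssh hostname 'sha1sum /home/user/backup.tar.gz; ls -l /home/user/backup.tar.gz'

        # a verbose copy shows the exact target path being written
        scp -v /tmp/backup.tar.gz hostname:/home/user/backup.tar.gz

    If the checksums still differ after a "successful" copy, suspect that the two commands are not looking at the same file: a symlinked path, a different home directory or account on the destination, or Cygwin path mapping pointing somewhere unexpected.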

    Read the article

  • Postfix sends email to spam (Gmail, Hotmail)

    - by razorxan
    I recently installed a Postfix + Dovecot + DKIM multi-domain, multi-user, multi-alias mail server on my Debian Squeeze system. Everything works except for one big issue that basically makes the whole thing useless: every single email sent by my server goes straight into spam (Gmail, Hotmail). The first thing I did was run the well-known allaboutspam test, and everything is checked green except for the BATV item (yellow):
        Reverse DNS:   green
        HELO greeting: green
        RBL:           green
        BATV:          yellow
        SPF:           green
        DKIM:          green
        URIBL:         green
        SpamAssassin:  green
        Greylist:      green
    I'm really confused and I can't see a way to solve this issue. Ask me for any detail you need.
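
    For reference, a few checks that are easy to re-run from the server itself; mail.example.com, the selector name and 203.0.113.10 are placeholders for your real values:

        # forward and reverse DNS should agree with the HELO name Postfix announces
        dig +short A mail.example.com
        dig +short -x 203.0.113.10        # the server's public IP

        # the SPF and DKIM records the world actually sees
        dig +short TXT example.com
        dig +short TXT default._domainkey.example.com

    Beyond that, sending a test message to a Gmail address and reading the Authentication-Results header in the received copy shows Gmail's own SPF/DKIM verdicts; if those all pass, the remaining suspects are usually the reputation of the IP range and the content of the messages themselves.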

    Read the article

  • SMTP hacked by spammer using base64 encoding to authenticate

    - by Throlkim
    Over the past day we've detected someone from China using our server to send spam email. It's very likely that he's using a weak username/password to access our SMTP server, but the problem is that he appears to be using base64 encoding to prevent us from finding out which account he's using. Here's an example from the maillog:
        May 5 05:52:15 195396-app3 smtp_auth: SMTP connect from (null)@193.14.55.59.broad.gz.jx.dynamic.163data.com.cn [59.55.14.193]
        May 5 05:52:15 195396-app3 smtp_auth: smtp_auth: SMTP user info : logged in from (null)@193.14.55.59.broad.gz.jx.dynamic.163data.com.cn [59.55.14.193]
    Is there any way to detect which account it is that he's using?
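
    Base64 here is an encoding, not encryption, so any AUTH strings captured from the SMTP dialogue decode trivially. A sketch, assuming you can capture an unencrypted AUTH LOGIN exchange on port 25 (the sample credential string is made up):

        # watch the SMTP dialogue; the lines after "334 VXNlcm5hbWU6" (base64 for "Username:")
        # contain the credential being abused
        sudo tcpdump -i eth0 -s 0 -A 'tcp port 25'

        # decode a captured string
        echo 'am9obi5kb2U=' | base64 -d     # prints: john.doe

    This only works for sessions that are not wrapped in TLS; for encrypted sessions you would need the mail server's own authentication logging to name the account.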

    Read the article

  • How to run "mongodb --repair" if it's an Upstart job?

    - by Wolfram Arnold
    My MongoDB server died. The log says something about an unclean shutdown and an existing mongodb.lock file, and recommends removing the lock file and then restarting the mongodb server with --repair. However, on my system (Ubuntu 10.10) I installed MongoDB via an apt-get package, and it's set up as an Upstart job. If I run mongodb from the command line, it won't find the data; none of the paths are set correctly. Surely I could read the man page, try to emulate what Upstart would do, and give it all the correct parameters plus --repair, but that seems like a lot of trouble. There must be a simpler way that doesn't fight Upstart. What is it?
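
    A hedged sketch of the usual sequence for the Ubuntu package of that era; the paths are assumptions, so check dbpath in /etc/mongodb.conf before trusting them:

        # stop the Upstart-managed instance
        sudo service mongodb stop

        # remove the stale lock file the log complained about
        sudo rm /var/lib/mongodb/mongod.lock

        # run the repair as the service user, reusing the service's own config
        sudo -u mongodb mongod --config /etc/mongodb.conf --repair

        # hand control back to Upstart
        sudo service mongodb start

    Running the repair as root instead of the service user is a common follow-on mistake: the repaired files end up owned by root and the Upstart job then fails to start until ownership is fixed.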

    Read the article

  • Perform action based on load avg

    - by sfx
    I'm running some web applications on a Debian server and sometimes have to struggle with DDoS attacks. They eat up all my resources and I can no longer ssh into the server. One idea was to drop all connections if the load average is too high, so there are still resources left for me, and to accept new connections again once the load average is low enough. Since this has to work under heavy load, I'm afraid a cron job would be too slow or would itself use too many resources. tl;dr: Is there a way to configure behaviour that kicks in when the load average is above a specific threshold?
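
    A minimal sketch of the polling approach, run from cron every minute or from a small loop; the threshold, the port, and the choice to shed only HTTP traffic (never SSH) are placeholders to adapt:

        #!/bin/bash
        # drop new HTTP connections while the 1-minute load average is above THRESHOLD
        THRESHOLD=20
        LOAD=$(cut -d' ' -f1 /proc/loadavg | cut -d. -f1)

        if [ "$LOAD" -ge "$THRESHOLD" ]; then
            # re-insert the rule idempotently (delete any previous copy first)
            iptables -D INPUT -p tcp --syn --dport 80 -j DROP 2>/dev/null
            iptables -I INPUT -p tcp --syn --dport 80 -j DROP
        else
            iptables -D INPUT -p tcp --syn --dport 80 -j DROP 2>/dev/null
        fi

    For the specific "can't even ssh in any more" symptom, rate-limiting new connections up front (the iptables limit or connlimit modules) tends to work better than reacting to the load average after the fact, since the box may already be too far gone to run the cron job on time.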

    Read the article

  • PBX with Fax and Google Voice

    - by Phill Pafford
    Looking to replace/port my home number, which I use mainly for faxing for my home business, to a PBX server (thinking Asterisk or Elastix). My questions are: does Asterisk/Elastix support faxing (incoming/outgoing)? Does Asterisk/Elastix support Google Voice? Here is what I'm looking to do: run some sort of PBX software on my own home server that will allow me to use Google Voice for my home number, possibly with multiple Google Voice accounts (though I could live with just one), and it must support faxing (incoming and outgoing). Would Asterisk/Elastix support all of this, or would you recommend something else? I'm looking to avoid some of the pitfalls that could happen. I like Ubuntu, if a Linux environment is needed.

    Read the article

  • htaccess IP blocking with custom 403 Error not working

    - by mrc0der
    I'm trying to block everyone but one IP address from my site on a server running Apache and CentOS. My setup follows the example below.
    My server: http://www.myserver.com/
    My .htaccess file:
        <limit GET>
        order deny,allow
        deny from all
        allow from 176.219.192.141
        </limit>
        ErrorDocument 403 http://www.google.com
        ErrorDocument 404 http://www.google.com
    When I visit http://www.myserver.com/ from a non-allowed IP, it gives me a generic 403 error. When I visit http://www.myserver.com/page-does-not-exist/ it redirects me correctly to http://www.google.com, but I can't figure out why the 403 error doesn't redirect me too. Does anyone have any ideas?

    Read the article

  • Troubleshooting an intermittently long 'page loading' problem

    - by justSteve
    Working off IIS 7 with an ASP.NET MVC app. Most page accesses are lightning quick, but sometimes the browser reports 'page loading' for an exceptionally long time. We are still in development/testing mode, so it's not a network/bandwidth issue. It happens in both IE and FF, and there are no explicit error conditions in the server logs. If I could reproduce it, I could run Firebug/Fiddler to get more information; as it is, it's a little too infrequent to simply leave both of those running and hope for the condition to fire - unless/until that's the only option for getting better info. I have a hunch this is client-side jQuery/Ajax related, but it's only a hunch; I don't have enough background in either jQuery or MVC to really know what can go wrong. Any initial troubleshooting suggestions welcome. Thanks

    Read the article

  • FreeNAS running on ESXi - sometimes gets very sluggish.

    - by Luma
    Hello everyone, I have an ESXi server (dual quad-core, 8 GB of DDR3 RAM, 6x 1 TB WD Blacks running in RAID 5 on the PERC 6/i controller). I have a 64-bit FreeNAS VM running; on this VM I keep about 200 GB of stuff that my Windows machines access. Every now and then the throughput of this VM just dies; for example, right now it can't even handle streaming a song, and when I tried to transfer a folder the speed bounced between 10 and 400 KB/s. I should add that the ESXi box has dual gigabit network cards plugged into a good, solid gigabit switch, and the other Linux and Windows VMs are just fine - I have frequently seen speeds over 90 MB/s. The server still has RAM left over (plenty, actually) and CPU usage is very low (500-1000 MHz). Any ideas what could cause this? Thanks. Luc

    Read the article

  • Blocking a country (mass IP ranges), best practice for the actual block

    - by kwiksand
    Hi all, this question has obviously been asked many times in many different forms, but I can't find an actual answer to the specific plan I've got. We run a popular European commercial deals site, and are getting a large amount of incoming registrations/traffic from countries that cannot even take part in the deals we offer (and many of the retailers aren't even known outside Western Europe). I've identified the IP ranges I need to block to cut out a lot of this traffic, but (as expected) there are thousands of them. My question, now (finally!): on a test server I created a script to block each range with iptables, but the time it took to add the rules was large, and iptables was unresponsive afterwards (especially when attempting an iptables -L). What is the most efficient way of blocking large numbers of IP ranges: iptables? A plugin where I can preload them efficiently? hosts.deny? .htaccess (nasty, as I'd be running it in Apache on every load-balanced web server)? Cheers
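
    One common pattern, sketched here with placeholder names: load the ranges into an ipset set and let a single iptables rule match the whole set, instead of one rule per range (the syntax shown is for ipset 6.x; older releases spell the create step "ipset -N geoblock nethash"):

        # a hash:net set sized for tens of thousands of CIDR ranges
        ipset create geoblock hash:net maxelem 131072

        # country-ranges.txt: one CIDR per line, e.g. 203.0.113.0/24
        while read -r cidr; do
            ipset add geoblock "$cidr"
        done < country-ranges.txt

        # one rule consults the whole set in roughly constant time
        iptables -I INPUT -m set --match-set geoblock src -j DROP

    Bulk-loading with "ipset restore" from a pre-generated file is considerably faster than adding entries one at a time, and because the set lives in kernel memory it avoids the iptables -L stalls you saw with thousands of individual rules.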

    Read the article

  • Out Of Memory Error - Magento

    - by robobobobo
    OK, normally I understand when my server is giving me out-of-memory errors, but this one has me stumped! I'm running a Magento-based site with one or two plugins, and the rest is pretty basic. The site runs and loads fine with no issues. However, in the backend, under Configuration - Payment Methods, it gives me the following out-of-memory error:
        Fatal error: Out of memory (allocated 39059456) (tried to allocate 85 bytes) in ########/Varien/Simplexml/Element.php on line 84
    Now this is where I'm confused: it has allocated more than it tried to allocate? Am I correct there? So how is it running out of memory? My server has 6 GB of RAM, an SSD and 2 CPUs, running WHM with a few other low-traffic sites on it. I set my PHP memory limit to 100 MB, 1000 MB and finally unlimited, all to no avail! I'm completely lost here and would really appreciate some expertise on this. Cheers
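
    For what it's worth, that particular wording matters: PHP reports "Allowed memory size ... exhausted" when its own memory_limit is hit, while a bare "Out of memory (allocated ...)" generally means the operating system or a per-process cap refused to hand PHP more memory, which is why raising memory_limit changes nothing. Some hedged checks, with paths assuming a cPanel/WHM layout:

        # the limit the web-facing PHP really runs with
        php -i | grep -i memory_limit

        # per-process limits imposed on the PHP/Apache processes
        ulimit -a

        # Apache-level RLimitMEM or similar caps configured by the control panel
        grep -Rin rlimit /usr/local/apache/conf/ 2>/dev/null

    On WHM boxes, per-account memory caps (CloudLinux LVE, suPHP/FastCGI rlimits) are frequent culprits even when the box itself has plenty of free RAM.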

    Read the article

  • Limit disk I/O one program creates?

    - by Posipiet
    Hardware: one virtualization server. Dual Nehalem, 24 GB RAM, 2 TB mirrored HD. Software: Debian, KVM, virt-manager on the server, with several virtual machines that also run Linux. The 2 TB disk is one big LVM volume group; each VM gets a logical volume and makes its own partitions in it.
    Problem: one of the programs that runs in one of the VMs creates a huge disk load. This was never an issue before, because the program never ran on such powerful hardware. Now the CPUs are fast, and lots of I/O is the result. We can't do much about that at the moment, because the tool is a black box; on the other hand, the speedy computation is welcome. The program creates about 5 GB of temp files, which get overwritten during the next iteration.
    Question: how can we limit the disk I/O for the process?
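
    Two hedged host-side options, since the tool itself can't be changed; the volume's major:minor number, the cgroup name and the way the VM's PID is found are all placeholders:

        VM_PID=$(pgrep -f 'kvm.*slowvm' | head -n1)    # hypothetical way to find the VM's qemu/kvm process

        # 1) lowest I/O scheduling class for the whole VM (effective with the CFQ scheduler)
        ionice -c3 -p "$VM_PID"

        # 2) hard byte-per-second cap via the blkio cgroup, assuming it is mounted at /sys/fs/cgroup/blkio;
        #    253:4 stands in for the logical volume's major:minor (see ls -l /dev/mapper)
        mkdir -p /sys/fs/cgroup/blkio/slowvm
        echo "253:4 10485760" > /sys/fs/cgroup/blkio/slowvm/blkio.throttle.write_bps_device   # ~10 MB/s writes
        echo "$VM_PID" > /sys/fs/cgroup/blkio/slowvm/tasks

    If the libvirt version on the host is recent enough, virsh blkdeviotune exposes similar per-disk throttling without touching the cgroup filesystem by hand.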

    Read the article

  • How to authenticate users in nested groups in Apache LDAP?

    - by mark
    I have working LDAP authentication with the following setup:
        AuthName "whatever"
        AuthType Basic
        AuthBasicProvider ldap
        AuthLDAPUrl "ldap://server/OU=SBSUsers,OU=Users,OU=MyBusiness,DC=company,DC=local?sAMAccountName?sub?(objectClass=*)"
        Require ldap-group CN=MySpecificGroup,OU=Security Groups,OU=MyBusiness,DC=company,DC=local
    This works; however, I have to put every user I want to authenticate into MySpecificGroup. On the LDAP server I've configured MySpecificGroup so that it also contains the group MyOtherGroup, with another list of users, but the users in MyOtherGroup are not authenticated; I have to manually add them all to MySpecificGroup and basically can't use nested groups. I'm using Windows SBS 2003. Is there a way to configure Apache LDAP to handle this? Or is there a problem with possible infinite recursion, and is it therefore not allowed?
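
    A hedged way to check, outside Apache, whether the directory will expand the nesting for you: Active Directory's LDAP_MATCHING_RULE_IN_CHAIN matching rule (OID 1.2.840.113556.1.4.1941) follows nested group membership in a single query, although it may require a sufficiently patched domain controller (it appeared around the Windows Server 2003 SP2 timeframe). The bind DN and test account below are placeholders:

        ldapsearch -x -H ldap://server \
          -D 'CN=BindUser,OU=SBSUsers,OU=Users,OU=MyBusiness,DC=company,DC=local' -W \
          -b 'DC=company,DC=local' \
          '(&(sAMAccountName=someuser)(memberOf:1.2.840.113556.1.4.1941:=CN=MySpecificGroup,OU=Security Groups,OU=MyBusiness,DC=company,DC=local))' dn

    If that returns the nested user, the same memberOf filter can be expressed on the Apache side with Require ldap-filter, or nested lookups can be enabled with AuthLDAPSubGroupDepth on Apache 2.4's mod_authnz_ldap; which option is available depends on the Apache version in use.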

    Read the article

  • Multicast hostname lookups on OSX

    - by KARASZI István
    I have a problem with hostname lookups on my OSX computer. According to Apple's HK3473 document, for v10.6:
        Host names that contain only one label in addition to local, for example "My-Computer.local", are resolved using Multicast DNS (Bonjour) by default. Host names that contain two or more labels in addition to local, for example "server.domain.local", are resolved using a DNS server by default.
    This is not true in my testing. If I try to open a connection from my local computer to a remote port:
        telnet example.domain.local 22
    then it looks up the IP address with multicast DNS in addition to the A and AAAA lookups. This causes a two-second lookup timeout on every lookup, which is a lot! When I try with IPv4 only, it won't use multicast queries to fetch the remote address, just the simple A queries:
        telnet -4 example.domain.local 22
    When I try with IPv6 only:
        telnet -6 example.domain.local 22
    it looks up with multicast DNS and AAAA again, and the two-second timeout delay occurs again. I've tried to create resolver entries in /etc/resolver/domain.local and /etc/resolver/local.1, but neither of them worked. Is there any way to disable these multicast lookups for the "two or more labels in addition to local" domains, or simply to disable them for the selected subdomain (domain.local)? Thank you!
    Update #1: Thanks @mralexgray for the scutil --dns command; now I can see my domain in the list, but it's late in the order:
        DNS configuration
        resolver #1
          domain        : adverticum.lan
          nameserver[0] : 192.168.1.1
          order         : 200000
        resolver #2
          domain  : local
          options : mdns
          timeout : 2
          order   : 300000
        resolver #3
          domain  : 254.169.in-addr.arpa
          options : mdns
          timeout : 2
          order   : 300200
        resolver #4
          domain  : 8.e.f.ip6.arpa
          options : mdns
          timeout : 2
          order   : 300400
        resolver #5
          domain  : 9.e.f.ip6.arpa
          options : mdns
          timeout : 2
          order   : 300600
        resolver #6
          domain  : a.e.f.ip6.arpa
          options : mdns
          timeout : 2
          order   : 300800
        resolver #7
          domain  : b.e.f.ip6.arpa
          options : mdns
          timeout : 2
          order   : 301000
        resolver #8
          domain        : domain.local
          nameserver[0] : 192.168.1.1
          order         : 200001
    Maybe it would work if I could move resolver #8 to position #2.
    Update #2: No, that probably won't work, because the local DNS server on 192.168.1.1 is already answering the domain.local requests, and by order value resolver #8 already comes before the mDNS resolver (resolver #2).
    Update #3: I could decrease the mDNS timeout in the /System/Library/SystemConfiguration/IPMonitor.bundle/Contents/Info.plist file, which speeds up the lookups a little, but this is not the solution.

    Read the article

  • Deployment of broadband network

    - by sthustfo
    Hi all, my query is related to broadband network deployment. I have a DSL modem connection provided by my operator. The DSL modem has a built-in NAT and DHCP server, so it allocates IP addresses to any client devices (laptops, PCs, mobiles) that connect to it. However, the DSL modem also gets a public IP address X that is provisioned by the operator. My questions are:
    1) Is this IP address X provisioned by the operator an address that sits directly on the public Internet?
    2) Is it likely (as a practical scenario) that my broadband operator will put in one more NAT + DHCP server and provide IP addresses to all the modems within his broadband network? In this case, the IP addresses allotted to the modem devices will not be directly on the public Internet.
    Thanks in advance.
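
    A hedged way to tell the two cases apart from behind the modem: compare the WAN address shown on the modem's status page with the address the public Internet sees (the lookup service below is just one example):

        # the address your traffic appears to come from on the public Internet
        curl -s https://ifconfig.me

        # if this differs from the modem's WAN IP, or the WAN IP falls inside 10.0.0.0/8,
        # 172.16.0.0/12, 192.168.0.0/16 or the carrier-grade NAT range 100.64.0.0/10,
        # then the operator is NATing you and address X is not directly on the public Internet

    Operators do deploy exactly that second scenario (carrier-grade NAT) when public IPv4 addresses are scarce, so both outcomes are realistic.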

    Read the article

  • How can one send commands to the "inner" ssh session?

    - by iconoclast
    Picture a scenario where I'm logged into a server (which we'll call "Wallace") from my local machine, and from there I ssh into another server (which we'll call "Gromit"): laptop ---ssh---> Wallace ---ssh---> Gromit Then the ssh session from Wallace to Gromit hangs, and I want to kill it. If I enter ~. to kill ssh, it kills the ssh session from my laptop to Wallace, because the ~ is intercepted by that ssh session, and the . is taken as a command to kill the session. How do I send a command to the ssh session between Wallace and Gromit? How do I kill my "inner" ssh?
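
    For reference, ssh's escape character can be forwarded by doubling it, so each extra tilde reaches one session deeper (this is documented under ESCAPE CHARACTERS in ssh(1); escapes are only recognized right after a newline):

        ~.      # terminates the outermost session (laptop -> Wallace)
        ~~.     # the first ~ is consumed by the outer ssh, the second one is passed through,
                # so the inner ssh (Wallace -> Gromit) sees "~." and terminates instead
        ~?      # lists the escape sequences the session in front of you understands

    So for the hung inner session in the scenario above, pressing Enter and then typing ~~. kills Wallace -> Gromit while leaving laptop -> Wallace intact.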

    Read the article

  • How to get Synergy working on Ubuntu 11.10 and Windows 7?

    - by Linda
    I'm using Ubuntu 11.10 32-bit and Windows 7 64-bit; however, Synergy only works when a window (application or folder) is open and touching the edge of the screen where the mouse should "jump". In other words, if a window is open and maximized, Synergy works normally. Without any windows, the mouse does not jump to the other screen.
    My steps:
        (Ubuntu)  apt-get install -y quicksynergy
        (Windows) Install Synergy (I've tried both 1.3.8 and 1.4.8, and both 32 and 64-bit)
    On Ubuntu 11.10 32-bit (Synergy server config), ~/.quicksynergy/synergy.conf:
        section: screens
            myubuntu:
            mywin7:
        end
        section: links
            myubuntu:
                right = mywin7
            mywin7:
                left = myubuntu
        end
    On Ubuntu 11.10 32-bit:
        $ /usr/bin/synergys -f --config .quicksynergy/synergy.conf
        ...
        2012-04-25T14:04:12 NOTE: client "mywin7" has connected /build/buildd/synergy-1.3.6/lib/server/CServer.cpp,287
        (output hangs here)
    On Windows 7 64-bit:
        Synergy 1.3.8 Client on Microsoft Windows 7 x86 (WOW64)
        started client
        connecting to 'myubuntu': ###.###.###.###:24800
        connected to server
        (output hangs here)
    At this point, things should work, but my mouse still can't change screens unless a window is maximized on my Ubuntu machine. Everything is running on port 24800. There is no firewall on Ubuntu, and firewall port 24800 is open on Windows 7. This was previously working on Ubuntu 10.10 and Windows 7 (so only Ubuntu has been upgraded). I'm open to using either 32 or 64-bit on either the server or client side, but I just want to get it working on Ubuntu 11.10 and Windows 7! I'm also using Ubuntu Classic (no effects), and not Unity.

    Read the article

  • How complex a daemon should be run through inetd?

    - by amphetamachine
    What is the general rule for which daemons should be started through inetd? Currently, on my server, sshd, Apache and sendmail are set up to run all the time, while simple *NIX services are set up to be started by inetd. I'm the only one who uses ssh on my computer, break-in attempts aren't a problem because I have it running on a non-standard port, and my HTTP server gets maybe 5 hits a day that aren't GoogleBot. My question is: what are the benefits and the performance hits associated with running a complex daemon like sshd or Apache through the superserver, and what successes or failures, if any, have you had running your own daemons in this manner?
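
    For concreteness, running sshd on demand from the superserver is a one-line change in classic inetd syntax; the entry below is hypothetical and uses the standard "ssh" service name from /etc/services (the -i flag tells sshd it was spawned by inetd):

        # /etc/inetd.conf
        ssh  stream  tcp  nowait  root  /usr/sbin/sshd  sshd -i

    The trade-off is a fork/exec plus full key and config parsing on every incoming connection (noticeable for sshd, generally prohibitive for Apache) against not keeping an idle daemon resident; at a handful of hits per day either choice is workable, which is why the usual rule of thumb is inetd for small, rarely used services and standalone for anything busy or slow to start.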

    Read the article

  • Kernel-mode Authentication: 401 errors when accessing site from remote machines

    - by CJM
    I have several Classic ASP sites that use Integrated Windows Authentication and Kerberos delegation. They work OK on the live servers (recently moved to Server 2008/IIS7 servers), but do not work fully on my development PC or my development server. IIS on both machines was configured through an IIS Web Deployment Tool package which was exported from an old machine; the deployment didn't work perfectly, and I had to tinker a bit to get the sites working. When accessing the apps locally on either machine, they work fine; when accessing from another machine, the user is prompted by a username/password dialog, and regardless of what you enter, it ultimately results in a 401 (Unauthorised) error.
    I've tried comparing the configuration of these machines against similar live servers (which all work fine), and they seem generally comparable (given that none of the live servers are yet on IIS 7.5, i.e. Windows 7/Server 2008 R2). These applications run in a common application pool which uses a special domain user as its identity; this user has similar permissions on the live and development machines. On IIS6 platforms, to enable Kerberos delegation, I needed to set up some SPNs for this user, and they are still in place (even though I don't believe they are needed any longer for IIS7+ due to kernel-mode authentication). Furthermore, this account is enabled for Kerberos delegation in Active Directory, as is each machine I am dealing with.
    I'm considering the possibility that the deployment might have made changes, or failed to make changes, to the IIS configuration, thus causing this problem. Perhaps a complete rebuild (minus another web deployment attempt) would solve the problem, but I'd rather fix (and thus understand) the current problem. Any ideas so far?
    I've just had another attempt at fixing this issue, and I've made some progress, but I don't have a complete fix... yet. I've discovered that if I access the sites via IP address (rather than via NetBIOS name), I get the same dialog, except that it accepts my credentials and the application works - not quite a fix, but a useful step. More interestingly, I discovered that if I disable kernel-mode authentication (in IIS Manager, under the website's Authentication, Advanced Settings), the applications work perfectly. My foggy understanding is that this is effectively working in the pre-IIS7 way. That is a reasonable short-term solution, but consider the following explicit advice from IIS on this issue:
        By default, IIS enables kernel-mode authentication, which may improve authentication performance and prevent authentication problems with application pools configured to use a custom identity. As a best practice, do not disable this setting if Kerberos authentication is used in your environment and the application pool is configured to use a custom identity.
    Clearly, this is not the way my applications should be working. So what is the issue?
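
    Given the combination described (kernel-mode authentication, Kerberos, and an application pool running as a custom domain identity), the setting usually involved is useAppPoolCredentials, which tells kernel-mode authentication to decrypt tickets with the pool identity's key instead of the machine account's. A hedged sketch of setting it, with the site name as a placeholder, run from an elevated command prompt:

        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
          -section:system.webServer/security/authentication/windowsAuthentication ^
          /useAppPoolCredentials:"True" /commit:apphost

    That keeps kernel-mode authentication enabled, in line with the quoted guidance, while still letting the custom identity's SPNs be used for Kerberos; whether it also explains the IP-address-versus-NetBIOS-name difference would still need to be verified on the development machines.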

    Read the article
