Search Results

Search found 26176 results on 1048 pages for 'stream socket client'.


  • Apache2 Virtualhost practice config issue

    - by sisko
    I am practicing VirtualHost configuration. In my /var/www directory I have created 3 directories called test1, test2 and test3, each of which has a simple index.php script in it, i.e. test1/index.php etc. In /etc/apache2/sites-available/test1 I have the following configuration: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName test1 DocumentRoot /var/www/test1 <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /var/www/test1/> Options -Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> All the other sites have a similar VirtualHost definition. I have enabled the site (the symlink appears in sites-enabled) and I have restarted Apache. However, when I visit localhost/test1, I get a 404 error. My error log shows the following message: [Wed Oct 23 06:22:52 2013] [error] [client 127.0.0.1] File does not exist: /var/www/test1/test1 I don't know why I get the double test1/test1 in the error logs. I'm trying to find the right VirtualHost setup which will allow all 3 test websites to be served from their URLs, i.e. test1/index.php, test2/index.php and test3/index.php. Can anyone help me out, please?
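
    A plausible reading of the doubled test1/test1: visiting http://localhost/test1 sends Host: localhost, which matches none of the ServerName directives, so Apache falls back to the first enabled vhost (test1) and appends the /test1 path to that vhost's DocumentRoot. A minimal sketch of the name-based fix, assuming the browser runs on the same machine (the hosts entries below are hypothetical):

        # /etc/hosts on the client machine
        127.0.0.1   test1 test2 test3

    Then browse to http://test1/, http://test2/ and http://test3/ instead of localhost/test1; each request's Host header now matches a ServerName, so the right DocumentRoot is used.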

    Read the article

  • Selenium server won't start

    - by moff
    I'm getting the following error when trying to start Selenium: C:\Temp\selenium-server-1.0.3>java -jar selenium-server.jar 22:02:07.615 INFO - Java: Sun Microsystems Inc. 16.0-b13 22:02:07.617 INFO - OS: Windows 7 6.1 x86 22:02:07.625 INFO - v2.0 [a2], with Core v2.0 [a2] 22:02:07.811 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub 22:02:07.813 INFO - Version Jetty/5.1.x 22:02:07.815 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver] 22:02:07.817 INFO - Started HttpContext[/selenium-server,/selenium-server] 22:02:07.818 INFO - Started HttpContext[/,/] 22:02:07.866 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler@2bbd86 22:02:07.867 INFO - Started HttpContext[/wd,/wd] 22:02:07.870 WARN - Failed to start: [email protected]:4444 Exception in thread "main" org.openqa.jetty.util.MultiException[java.net.SocketException: Unrecognized Windows Sockets error: 0: JVM_Bind] at org.openqa.jetty.http.HttpServer.doStart(HttpServer.java:686) at org.openqa.jetty.util.Container.start(Container.java:72) at org.openqa.selenium.server.SeleniumServer.start(SeleniumServer.java:396) at org.openqa.selenium.server.SeleniumServer.boot(SeleniumServer.java:234) at org.openqa.selenium.server.SeleniumServer.main(SeleniumServer.java:198) java.net.SocketException: Unrecognized Windows Sockets error: 0: JVM_Bind at java.net.PlainSocketImpl.socketBind(Native Method) at java.net.PlainSocketImpl.bind(Unknown Source) at java.net.ServerSocket.bind(Unknown Source) at java.net.ServerSocket.<init>(Unknown Source) at org.openqa.jetty.util.ThreadedServer.newServerSocket(ThreadedServer.java:391) at org.openqa.jetty.util.ThreadedServer.open(ThreadedServer.java:477) at org.openqa.jetty.util.ThreadedServer.start(ThreadedServer.java:503) at org.openqa.jetty.http.SocketListener.start(SocketListener.java:204) at org.openqa.jetty.http.HttpServer.doStart(HttpServer.java:716) at org.openqa.jetty.util.Container.start(Container.java:72) at org.openqa.selenium.server.SeleniumServer.start(SeleniumServer.java:396) at org.openqa.selenium.server.SeleniumServer.boot(SeleniumServer.java:234) at org.openqa.selenium.server.SeleniumServer.main(SeleniumServer.java:198) Java is installed: C:\Temp\selenium-server-1.0.3>java -version java version "1.6.0_18" Java(TM) SE Runtime Environment (build 1.6.0_18-b07) Java HotSpot(TM) Client VM (build 16.0-b13, mixed mode, sharing) Thanks in advance
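
    A JVM_Bind failure while Jetty opens its listener almost always means port 4444 is already taken (often an earlier selenium-server instance still running) or the winsock stack is in a bad state. A quick sketch to check, and to sidestep the conflict (-port is a standard selenium-server option):

        rem is anything already listening on 4444? (owning PID in the last column)
        netstat -ano | findstr :4444
        rem start the server on another port instead
        java -jar selenium-server.jar -port 4445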

    Read the article

  • Poor SSL performance with vsftpd

    - by petrus
    I'm trying to tweak vsftpd to achieve maximum performance for my usage: I have only one or two clients that connect to the server. File size is between ~15MB and 1GB. A typical transfer batch represents between 1 and 2GB of data. For testing purposes, I'm using a tmpfs on both sides (thus eliminating any disk bottleneck) with a single 1GB file. When SSL is disabled, performance is good, with a transfer rate of ~120MB/s (reaching the limits of gigabit networking). With SSL enabled only for control traffic (and not data traffic), performance drops to about 112MB/s, which is still within acceptable limits. However, when SSL is enabled for data flows, the transfer speed drops dramatically: 6.7MB/s using 3DES & SHA (ssl_ciphers=DES-CBC3-SHA in vsftpd.conf) 16MB/s using DES & SHA (ssl_ciphers=DES-CBC-SHA) I haven't tested other ciphers, but from what I can see from the CPU usage during the transfer, it seems that vsftpd only uses a single CPU/core per client. While this may suit large FTP sites with hundreds of clients, I'd like to avoid this behavior and use more resources on the server. On a side note, if you have any ideas regarding other OpenSSL ciphers...
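
    DES and 3DES are among the slowest ciphers OpenSSL implements in software, so the cipher choice alone likely explains most of the drop; an AES-based suite usually moves the bottleneck back toward the network. A one-line sketch (AES128-SHA is a standard OpenSSL cipher name, but worth checking against the output of `openssl ciphers` on the box):

        # vsftpd.conf
        ssl_ciphers=AES128-SHA

    This won't make vsftpd use more than one core per connection, but per-core AES throughput is typically an order of magnitude above 3DES.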

    Read the article

  • Excel not properly recalculating values

    - by gms8994
    I have an Excel sheet with values in it (the sheet is generated by a custom Perl script, but I don't think that's where the problem lies). In it, I have a formula: =SUM(INDIRECT(CONCATENATE(ADDRESS(6,COLUMN()),":",ADDRESS(17,COLUMN())))) The purpose of this formula is to give me the SUM() of the cells in the current column, between rows 6 and 17. In Gnumeric Spreadsheet, this works as soon as I open the file. But in Excel (both 2003 and 2007), opening the file gives #VALUE! errors in the fields with this formula, stating that the INDIRECT call with the values $B$6:$B$17 will result in an error. Here's the kink in the issue: if I edit the field (via F2), make no changes, and hit Enter, the values update. Also, it seems that if I save the file as .xlsx (Excel 2007 format), the values update upon opening. Unfortunately, I'm not sure that creating an .xlsx is a possibility with the modules that I'm using, and many of our clients probably wouldn't be able to use it anyway. Any suggestions? Editing 200+ files every month for each client isn't going to be feasible, so if there's something I'm missing, please let me know.
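
    Since the formula only ever sums rows 6 to 17 of its own column, the volatile INDIRECT/ADDRESS chain can be dropped entirely; a plain relative reference survives being written by an external tool and recalculates normally. A sketch, assuming the total sits below row 17 in column B:

        =SUM(B6:B17)

    Entered once and filled across, the column letter adjusts automatically because the reference is relative. If INDIRECT must stay, a full forced recalculation (Ctrl+Alt+F9) after opening usually clears the #VALUE! errors.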

    Read the article

  • SQL Server 2005 database lost. How to recover all records? MDF/LDF size is the same as it should be

    - by Shantanu Gupta
    A few months back, I installed SQL Server 2005 on one of my client's machines. I gave him a backup option to take backups regularly, but he never took any backup. Today he called me saying "I'm not able to see any records of mine." I visited my client's system and saw that none of the records were present in the tables. There was not even a single row in any of the tables. Then I checked whether he had any backup file, which I found to be absent. I asked him what the possible cause could be. He said it might be due to a virus. After this I checked the size of the MDF and LDF files and found them to be the size they should be. When I created his database, the MDF/LDF files were about 2MB; now they are 83MB and 193MB respectively. This suggests the data is still present in the files but is not being displayed. What could be the possible cause, and how can I restore all the data back to my tables?
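
    Before treating this as corruption, it is worth confirming from metadata whether the tables really are empty, rather than the application simply pointing at the wrong database or schema; a hedged T-SQL sketch for SQL Server 2005:

        -- run in the suspect database: approximate row counts per table
        SELECT t.name AS table_name, SUM(p.rows) AS approx_rows
        FROM sys.tables AS t
        JOIN sys.partitions AS p
          ON p.object_id = t.object_id AND p.index_id IN (0, 1)
        GROUP BY t.name
        ORDER BY t.name;

        -- and a basic integrity check
        DBCC CHECKDB;

    If the counts really are zero, large MDF/LDF files alone don't prove the rows survive, since deleted pages stay allocated to the files; at that point, taking an immediate copy of the MDF/LDF and trying a log-reader or page-level recovery tool is the safer path.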

    Read the article

  • Amazon CloudFront and EC2: Global Load Balancing

    - by Matt Rogish
    We have an app that is going to store and serve up a decent amount of data in S3 to a global audience where latency should be minimized. So, we've been doing tests with Amazon CloudFront and have seen favorable results. However, we need a thin middleware layer (to do security etc.) and we'd like to put that in EC2. Due to security restrictions, this middleware layer will do the file streaming from S3/CloudFront: S3/CloudFront - EC2 - Clients We can geographically distribute the EC2 nodes (US East/West, and Ireland) but the problem is that a client in the EU would hit our US server and be fed data from there, thus rendering much of the performance benefit of CloudFront moot. I've been digging through the EC2 docs but I can't find a built-in way to get a geographically distributed version of EC2 à la CloudFront. Elastic Load Balancing sounds like the way to go, but I can't seem to find a way to make it direct clients based on their location... Preferably, we'd like to keep the amount of stuff outside of EC2/S3/etc. to a minimum (for obvious reasons). Any ideas how to do that within the EC2/S3 framework? DNS/routing tricks? Thanks!

    Read the article

  • Microsoft Windows DHCP: Steering IPv4 clients into specific scopes based on MAC

    - by Easter Sunshine
    We have visitors on our campus who bring their own laptops and devices and use our wireless and wired networks. When we receive a copyright infringement notice (typically BitTorrenting), we are required to quarantine that MAC address so that it no longer has Internet access. No matter what website it tries to visit, it is sent to a web page explaining to the user that the device has been quarantined. We have thus far implemented this in ISC DHCP on Linux. We have multiple VLANs with one or more public-IP subnets and one RFC1918 quarantine subnet each. All clients are leased IPs in the public-IP subnet(s) unless you're in a list of known bad MACs; then you are sent to the quarantine subnet so that your traffic is unroutable on the Internet (you are isolated by subnet only, not by VLAN). We would like to move to Windows DHCP in light of the IPAM role, but I cannot figure out how to replicate this in Windows DHCP 2012 (Assign DHCP IPs for specific MAC prefixes on Windows Server 2008 R2 suggests it was not possible in 2008 R2), even while using policies. So here's what I'd like: The administrator/help desk provides and maintains a list of MAC addresses that are to be quarantined. The DHCP server places those MACs into the quarantine subnet on the respective VLAN, no matter which VLAN the client is in. I don't think reservations would work: we currently have about 300 registered bad MACs and about 12 VLANs. I don't want to make 300 x 12 reservations, nor have to add 12 reservations per new MAC address. Not to mention all of the quarantine subnets are /24s. We do not have NPS/NAC. You do not have to register your MAC address to get network access. We use Cisco routers/switches. Thanks.
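
    Server 2012 DHCP policies can match on MAC address at scope level and are scriptable, so the 300-MAC list could be pushed to every scope automatically rather than maintained as reservations. A sketch only: the cmdlet names exist in 2012's DhcpServer PowerShell module, but the exact -MacAddress condition syntax should be verified with Get-Help, and the quarantine range has to live inside each scope (e.g. via a superscope), which differs from the separate-subnet ISC design:

        $badMacs = Get-Content C:\quarantine\macs.txt          # one MAC per line, help-desk maintained
        foreach ($scope in Get-DhcpServerv4Scope) {
            Add-DhcpServerv4Policy -ScopeId $scope.ScopeId -Name "Quarantine" `
                -Condition OR -MacAddress ($badMacs | ForEach-Object { "EQ,$_" })
            # hand quarantined clients addresses from a dedicated range inside the scope
            Add-DhcpServerv4PolicyIPRange -ScopeId $scope.ScopeId -Name "Quarantine" `
                -StartRange 10.99.0.10 -EndRange 10.99.0.250   # example range
        }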

    Read the article

  • Outlook error: 'Can't open e-mail folders. You must connect to Exchange w/ current profile before you can sync folders w/ your Outlook data file'

    - by Emilio
    Note that the error message I put in the title of this question is abbreviated. The actual error message is below. I have an Exchange account and using Outlook 2010 as the client. I run in Cached Exchange Mode and have an .OST file locally. Recently I uninstalled and reinstalled office. I set up a new mail profile when prompted by Outlook 2010 upon first execution of the program. In my initial attempt, I pointed the data file at my existing OST file. In my second attempt, I had Outlook create a fresh empty file. In both cases I'm getting the error 'Cannot open your default e-mail folders. You must connect to Microsoft Exchange with the current profile before you can synchronize your folders with your Outlook data file (.ost).', also shown in this screenshot -- http://drop.io/4rc9v9o/asset/outlook-error-png. I don't know how to connect with the current profile - that's what I thought I did when I created a new .OST file? I've had this problem for several days so my OST file is now out of date. Once I get things running I obviously want my active mailbox to update the OST, not the other way around.

    Read the article

  • ESXi 4.1 host not recognising existing VMFS datastore

    - by ThatGraemeGuy
    Existing setup: host1 and host2, ESX 4.0, 2 HBAs each. lun1 and lun2, 2 LUNs belonging to the same RAID set (my terminology might be sketchy here). This has been working just fine all along. I added host3, ESXi 4.1, 2 HBAs. If I view Configuration / Storage Adapters, I can see that both HBAs see both LUNs, but if I view Configuration / Storage, I only see 1 datastore. host1/2 can see both LUNs and I have VMs running on both too. I have rescanned, refreshed and even rebooted, but host3 refuses to acknowledge one of the datastores. Does anyone know what's going on? Update: I re-installed the host with ESX (not i) 4.0, the same version as the existing hosts, and it's still not recognising the VMFS. I think I'm going to SVMotion everything off that datastore and then format it. Update 2: I've created the LUN from scratch and the problem gets even weirder. I've presented the LUN to all 3 hosts, and I can see the LUN in the vSphere Client's Configuration / Storage Adapters section on all 3 hosts. If I create a datastore on the LUN via the Configuration / Storage section on host1, it works fine and I can create an empty folder via the datastore browser, but the datastore is not seen by host2 and host3. I can use the Add Storage wizard on host2 and it will see the LUN. At this point the "VMFS Label" column has the label I gave with "(head)" appended. If I try the Add Storage wizard's "Keep the existing signature" option, it fails with the error "Cannot change the host configuration." and a dialog box that says 'Call "HostStorageSystem.ResolveMultipleUnresolvedVmfsVolumes" for object "storageSystem-17" on vCenter Server "vcenter.company.local" failed.' If I try the Add Storage wizard's "Assign a new signature" option on host2, it will complete and the VMFS label will have "snap-(hexnumber)-" prepended. At this point it's also visible on host3, but not host1. I have a similar setup in a different datacenter which didn't give me all this trouble.
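
    The "(head)" label and the "snap-..." prefix suggest hosts 2 and 3 see the volume as a snapshot: VMFS records the LUN identity it was created against, and a host that sees the LUN under a different ID or path refuses to auto-mount it. On ESX 4.x the service console can list such volumes and force-mount them while keeping the existing signature; a sketch (esxcfg-volume is present on 4.x, but the flags are worth double-checking):

        # list volumes detected as snapshots/replicas
        esxcfg-volume -l
        # persistently mount one, keeping its existing signature
        esxcfg-volume -M <VMFS-UUID-or-label>

    Verifying that the array presents the LUN with the same LUN number to every host would address the root cause rather than the symptom.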

    Read the article

  • OpenWrt vs DD-WRT

    - by Ioan Paul Pirau
    I have a TP-Link WR1043ND router and I want to install one of these two firmwares: OpenWRT DD-WRT I read that I can install custom packages and do much more than I can with the original firmware. I would like to ask someone with experience using both OpenWRT and DD-WRT which one they would recommend and why. To give a few reference points, I'm interested in: reliability – network stability, both on cable and wireless and on the USB drive; performance – network speed (very important), also USB drive speed; configurability – the possibility to add extensions such as a torrent client, or FTP, SSH, WWW and SVN servers directly; ease of use – the ease of installation and configuration of the router; support/docs – how much info there is if you stumble upon a problem and have to find some documentation, or whether there's any free support (but that's a long shot). Of course I don't imagine that I will find the perfect firmware or that one is vastly superior to the other. Also, if there's anyone out there who uses one of these firmwares on a TP-Link WR1043ND, it would be great to get some feedback about the impact of the changes from the original firmware. P.S. I'm also open to Tomato if it's the better one.

    Read the article

  • Squid SSL transparent proxy - SSL_connect:error in SSLv2/v3 read server hello A

    - by larryzhao
    I am trying to set up an SSL proxy for one of my internal servers to visit https://www.googleapis.com using Squid, so that the Rails application on that server reaches googleapis.com via the proxy. I am new to this, so my approach is to set up an SSL transparent proxy with Squid. I built Squid 3.3 on Ubuntu 12.04, generated an SSL key and certificate pair, and configured Squid like this: http_port 443 transparent cert=/home/larry/ssl/server.csr key=/home/larry/ssl/server.key and left almost all other settings at their defaults. The permissions on the directory holding the key/crt are: drwxrwxr-x 2 proxy proxy 4096 Oct 17 15:45 ssl Back on my dev laptop, I put <proxy-server-ip> www.googleapis.com in my /etc/hosts to make the calls go to my proxy server. But when I try it from my Rails application, I get: SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol And I also tried openssl on the CLI: openssl s_client -state -nbio -connect www.googleapis.com:443 2>&1 | grep "^SSL" SSL_connect:before/connect initialization SSL_connect:SSLv2/v3 write client hello A SSL_connect:error in SSLv2/v3 read server hello A SSL_connect:error in SSLv2/v3 read server hello A Where did I go wrong?
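
    Two details stand out here. http_port speaks plain HTTP, so the client's TLS handshake gets answered with an HTTP response, and "unknown protocol" is exactly how OpenSSL reports that; and cert= points at server.csr (the signing request) rather than the issued certificate. For terminating TLS on 443, Squid uses https_port; a hedged squid.conf sketch, assuming Squid 3.3 was built with SSL support (the peering details for relaying onward to googleapis.com are left out):

        # https_port terminates TLS; note server.crt, not server.csr
        https_port 443 accel defaultsite=www.googleapis.com \
            cert=/home/larry/ssl/server.crt key=/home/larry/ssl/server.key

    The Rails app will also have to trust the local certificate, since it won't be signed for www.googleapis.com by a public CA.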

    Read the article

  • Iptables Forwarding problem

    - by ankit
    Hi all, I had initially asked a question about setting up my Linux box for NAT for my home network and was given suggestions in the thread here. I did not want to clutter the old question, so I'm starting a new one here. Based on the earlier suggestions, I have come up with the following rules ... :PREROUTING ACCEPT [1:48] :OUTPUT ACCEPT [12:860] :POSTROUTING ACCEPT [3:228] -A POSTROUTING -o eth0 -j MASQUERADE COMMIT *filter :INPUT DROP [3:228] :FORWARD DROP [0:0] :OUTPUT DROP [0:0] -A INPUT -i lo -j ACCEPT -A INPUT -i eth0 -p icmp -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp --dport 22 -j ACCEPT -A FORWARD -i eth1 -p icmp -j ACCEPT -A FORWARD -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT -A FORWARD -i eth1 -p tcp -m tcp --dport 443 -j ACCEPT -A OUTPUT -p icmp -j ACCEPT -A OUTPUT -j ACCEPT COMMIT If you notice, I do have the proper MASQUERADE rule and the proper FORWARD filter rules as well. However, I am facing 2 problems: On the Linux box itself, DNS resolution is not working. The LAN clients connected to the Linux box are still not able to get to the Internet; when I ping something from them, I see the DROP count in the iptables INPUT rule increasing. Now my question is: when I am pinging something from the LAN client, how come it is being matched by the INPUT chain?! Shouldn't it be in the FORWARD chain? Chain INPUT (policy DROP 20 packets, 2314 bytes) pkts bytes target prot opt in out source destination 99 9891 ACCEPT all -- lo any anywhere anywhere 0 0 ACCEPT icmp -- eth0 any anywhere anywhere 0 0 ACCEPT tcp -- eth0 any anywhere anywhere tcp dpt:http 0 0 ACCEPT tcp -- eth0 any anywhere anywhere tcp dpt:https 122 9092 ACCEPT tcp -- eth0 any anywhere anywhere tcp dpt:ssh Thanks ankit
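
    Both symptoms fit one omission: nothing accepts reply traffic. The box's own DNS queries go out (OUTPUT accepts everything), but the answers die in INPUT; likewise, forwarded connections have no FORWARD rule for packets returning on eth0, so LAN clients never see responses. As for INPUT vs FORWARD: packets addressed to the router itself (pinging the gateway, or DNS queries sent to it) traverse INPUT, while only traffic passing through to other hosts traverses FORWARD. A sketch of the missing pieces, assuming eth0 = WAN and eth1 = LAN as in the question:

        # accept replies to the box's own traffic (fixes DNS on the router itself)
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # accept replies coming back in from the Internet for forwarded connections
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # let LAN clients send DNS queries through the router
        iptables -A FORWARD -i eth1 -p udp --dport 53 -j ACCEPT
        iptables -A FORWARD -i eth1 -p tcp --dport 53 -j ACCEPT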

    Read the article

  • Outlook 2010 OSC with SharePoint 2010 documentation or help

    - by skypanther
    I'm looking for documentation or help to set up Outlook 2010's social connector to work with SharePoint 2010. I can't find much info other than some MS videos and blogs saying that it's possible. They say "we are already connected to SharePoint, so let's add in the XYZ connector..." but don't tell how they connected to SharePoint. Server A: Win2K8 x64, Exchange 2010 x64, domain controller Server B: Win2K8 x64, domain member, SQL Server 2008 R2, SharePoint 2010 Client: Win 7, domain member, with Office 2010 RTM (most every component installed, including the OSC) I've added my domain users to a group. In SharePoint, I granted that domain group permissions to access the site. As any of the domain users, I can use IE to log on the My Sites pages. From there, I can add lists, blog posts, status updates etc. In Outlook, I can add a SharePoint contact to my Contacts list. But, all I see in the People Pane is emails my users have exchanged. I don't see any of the other SharePoint status updates. When I try to use the "Add this person to a social network" link to connect them with my My Sites, the process fails. It doesn't give any sort of error message that helps me figure out why. Just a "Try Again" button that likewise fails. Any guides, links, suggestions? I'm pretty novice-level with SharePoint but mildly adept with the other technologies.

    Read the article

  • How do I make ESXi 5.0 to shutdown virtual machines when the physical power button is pushed?

    - by Pawel Sawicki
    I have a home NAS/DLNA server built out of an HP MicroServer with the HP-branded VMware ESXi 5.0.0 build 623860 (free license) installed. Being a home media center, I'd like it to be "manageable" by all my household members. This requires that it can be powered on and off (including all the VMs inside) by anybody with physical access to the server, simply by pressing the power button on the chassis. The "startup" part is easy: all I had to do was configure the startup/shutdown policy. Once the server powers up, all VMs start as well, and that's exactly what I need. Well, it did work up until 5.0.0 U1, but that's a different story: http://blogs.vmware.com/vsphere/2012/03/free-esxi-hypervisor-auto-start-breaks-with-50-update-1.html Unfortunately, pressing the power button doesn't gracefully shut down the guest machines; they are terminated instead. If I run the "shut down" command from the vSphere Client interface, guests are powered off properly. I'd like to get the same end result when the physical power button is pressed. I've poked around a bit on the ESXi server. There's a /sbin/shutdown.sh script that seemed to do exactly what I need... but running it does exactly what the power-off button does. /etc/inittab contains an entry for the "shutdown" level, but I suppose it's not hooked to the power button. I can't find any ACPI-related configuration, nor do I know what exactly is executed when the power button is pressed. Does anybody have a clue how I can make the VMs shut down gracefully when the physical power button is pressed to turn off the computer?
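
    The building blocks for a graceful script do exist in the ESXi shell: vim-cmd can enumerate guests and request a guest-OS shutdown (which needs VMware Tools in each VM). A sketch of that half; hooking it to the ACPI power-button event is the part that remains open:

        # ESXi shell sketch -- vim-cmd vmsvc subcommands are present on 5.0
        for id in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
          state=$(vim-cmd vmsvc/power.getstate $id | tail -1)
          if [ "$state" = "Powered on" ]; then
            vim-cmd vmsvc/power.shutdown $id   # guest shutdown; requires VMware Tools
          fi
        done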

    Read the article

  • Database mirroring login failure attempts on mirror server

    - by Chandan
    I have configured database mirroring between two servers about 40 miles apart. Server specifications: SQL Server 2008, Standard Edition, 64-bit; this is the same for the principal, mirror and witness. The configuration is high-safety with automatic failover. Initially we tested our .NET web application on both the principal and the mirror and made sure that the login is not orphaned. Things generally run fine, but sometimes on the mirror server I see failed login attempts: Login failed for user 'd0main\user'. Reason: Failed to open the explicitly specified database. [CLIENT: xx.xx.x.x] Message Error: 18456, Severity: 14, State: 38. This error appears 3-4 times a day, but not more than that. My question to the experts: if the principal is alive, why does the application try to connect to the mirror? The default timeout for a .NET web page is 30 seconds, so is it possible that the application tries to connect to the principal and, after 30 seconds, assumes it is dead (even if it is alive) and thus tries to open a connection to the mirror, where it fails? Please help me with this problem.
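
    This pattern is consistent with how the SqlClient provider handles mirroring: the mirror database is in a restoring state, so any connection redirected there fails with state 38 ("failed to open the explicitly specified database"), and the provider retries partners whenever the principal is slow to answer or a connection attempt times out, so occasional mirror-side noise is expected rather than alarming. Making the partner explicit also keeps behavior predictable after a real failover; the standard connection-string shape (server and database names are placeholders):

        Data Source=PRINCIPAL01;Failover Partner=MIRROR01;
        Initial Catalog=AppDb;Integrated Security=True;Connect Timeout=30;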

    Read the article

  • Photoshop CS6 Corrupted File recovery

    - by Ben Franchuk
    Last night I was working on a client application mock-up in Photoshop, but was going to take a break from my work, so I saved the .PSD file on my internal HDD and put my computer into stand-by mode once the file had finished saving. Unfortunately my computer crashed while it was entering stand-by and shut itself down (Photoshop was still open). I did not boot it again to make sure all my files were OK, because they had already been saved, but today, once I opened the file again, it was extremely corrupted and also completely un-editable (screenshot below). So what I'm asking is: is there any way to recover my work, or at least some of it? I have put a good few days' work into this project and would hate to have to restart it. The size of the file is 3070 KB, even though it reads as 712 KB in Photoshop. I don't know whether these file sizes are larger or smaller than the original non-corrupted file's size, but considering all the layers in the file I suspect it was larger before it was corrupted. I'm using Windows XP Professional 32-bit SP3. Both my OS and said .PSD file are located on the same internal HDD (74.4 GB). I do have an external HDD (1.5 TB) but I primarily only use it for movies, music and TV shows; I don't know if it was plugged in at the time I last edited the document, if that means anything. I have tried many image and PSD recovery tools, but none have returned any results that might help recover my work. Edit: I tried a photo recovery program (Odboso PhotoRecovery) that actually seems to recover the corrupted file in question, judging by the size of the file, but I cannot recover it because of the licence fee. Knowing that the file is likely still on my HDD, where might it be located?

    Read the article

  • Install IIS on Server 2003 unattended via PowerShell as a service user (no terminal session)

    - by maik
    I've been racking my brain with this for a bit and figured I would ask here to see if anyone could enlighten me. As the title says, I'm trying to install the IIS role on Server 2003 using an unattended install method launched via a service. We're using RightScale and most of what we want to accomplish is pretty straightforward. I created an unattend file for use with sysocmgr.exe: [Components] iis_common = ON iis_www = ON iis_www_vdir_scripts = ON iis_inetmgr = ON fp_extensions = ON iis_ftp = ON And I invoke it like so: sysocmgr.exe /i:%windir%\inf\sysoc.inf /u:C:\path\to\iis-unattend.txt /r /x /q If I run that from a command prompt while logged in as Administrator it works just fine, but if it runs via RightScript (the RightScale user on the server, which is a local admin) it fails somewhere in the middle and the logs I get are rather unhelpful. The thing is I can do this same thing with the SNMP Client (which is a Windows component, not a server role) and it works with no problems while run via the script service user. My best guess is that sysocmgr.exe is expecting a GUI element to be there during the role installation and since the service user has no terminal session it coughs and dies. That's just a wild stab in the dark.
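
    One common workaround for "needs an interactive session" installers is to have the service create a one-shot scheduled task that runs as SYSTEM, which gets a more complete execution environment than a bare service context. A sketch only; the schtasks flags are standard on Server 2003, but this should be tested interactively before wiring it into RightScript:

        rem create and immediately fire a one-shot task running as SYSTEM
        schtasks /Create /TN InstallIIS /RU SYSTEM /SC ONCE /ST 00:00 ^
            /TR "sysocmgr.exe /i:%windir%\inf\sysoc.inf /u:C:\path\to\iis-unattend.txt /r /x /q"
        schtasks /Run /TN InstallIIS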

    Read the article

  • How can I run Gnome or KDE locally in Cygwin?

    - by John Peter Thompson Garcés
    Apparently it is possible to do this using cygwin ports, as can be seen in screenshots. I followed this how-to to get apt-cygports set up, and I used it to install gnome-session. This how-to supposedly gives the commands needed to run Gnome or KDE, but whenever I try to run Gnome, a blank X-window pops up and then quickly disappears. Here is the terminal output: $ startx /usr/bin/dbus-launch gnome-session xauth: file /home/jpthomps/.serverauth.4168 does not exist Welcome to the XWin X Server Vendor: The Cygwin/X Project Release: 1.10.3.0 OS: Windows 7 Service Pack 1 [Windows NT 6.1 build 7601] (WoW64) Package: version 1.10.3-12 built 2011-08-22 XWin was started with the following command line: /usr/bin/X :0 -auth /home/jpthomps/.serverauth.4168 (II) xorg.conf is not supported (II) See http://x.cygwin.com/docs/faq/cygwin-x-faq.html for more information LoadPreferences: /home/jpthomps/.XWinrc not found LoadPreferences: Loading /etc/X11/system.XWinrc LoadPreferences: Done parsing the configuration file... winDetectSupportedEngines - DirectDraw installed, allowing ShadowDD winDetectSupportedEngines - Windows NT, allowing PrimaryDD winDetectSupportedEngines - DirectDraw4 installed, allowing ShadowDDNL winDetectSupportedEngines - Returning, supported engines 0000001f winSetEngine - Using Shadow DirectDraw NonLocking winScreenInit - Using Windows display depth of 32 bits per pixel winFinishScreenInitFB - Masks: 00ff0000 0000ff00 000000ff Screen 0 added at virtual desktop coordinate (0,0). MIT-SHM extension disabled due to lack of kernel support XFree86-Bigfont extension local-client optimization disabled due to lack of shared memory support in the kernel (II) AIGLX: Loaded and initialized /usr/lib/dri/swrast_dri.so (II) GLX: Initialized DRISWRAST GL provider for screen 0 winPointerWarpCursor - Discarding first warp: 637 478 (--) 5 mouse buttons found (--) Setting autorepeat to delay=500, rate=31 (--) Windows keyboard layout: "00000409" (00000409) "US", type 4 (--) Found matching XKB configuration "English (USA)" (--) Model = "pc105" Layout = "us" Variant = "none" Options = "none" Rules = "base" Model = "pc105" Layout = "us" Variant = "none" Options = "none" winBlockHandler - pthread_mutex_unlock() winProcEstablishConnection - winInitClipboard returned. winClipboardProc - DISPLAY=:0.0 winClipboardProc - XOpenDisplay () returned and successfully opened the display. xinit: XFree86_VT property unexpectedly has 0 items instead of 1 xinit: connection to X server lost waiting for X server to shut down winClipboardProc - winClipboardFlushWindowsMessageQueue trapped WM_QUIT message, exiting main loop. winClipboardProc - XDestroyWindow succeeded. winClipboardProc - Clipboard disabled - Exit from server winDeinitMultiWindowWM - Noting shutdown in progress
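
    The log shows the X server starting normally and then "xinit: connection to X server lost", meaning the session command itself exited almost immediately. One thing worth trying, offered as a sketch rather than a known fix: give the session a dedicated full-screen display and run gnome-session in the foreground, so its own error output stays visible in the terminal (-fullscreen and -clipboard are standard XWin flags; whether the cygports gnome-session then stays up I can't vouch for):

        # start a dedicated display
        XWin :1 -fullscreen -clipboard &
        export DISPLAY=:1.0
        # run the session in the foreground to see why it exits
        dbus-launch --exit-with-session gnome-session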

    Read the article

  • Using our own certificate authority for business email encryption

    - by LumenAlbum
    I've read the available similar questions on Server Fault, but I haven't quite found a definite answer to the security aspect of it, hence my question: I'm the administrator of an office working with tax data and we want to start using certificate-based email encryption with our clients. Considering the prices for certificates issued by VeriSign & Co., I was wondering if we couldn't issue the necessary certificates with a certificate authority of our own. I realize that this does not offer the trust hierarchy that commercial certificates do, but I don't see why we would need that. Most of our clients have small businesses and only 20% of them even exchange data with us via email. So if we were to issue certificates for those 20% and our employees, that would enable us to use encrypted email. Of course they would have to trust our certificate authority and thus receive our public root certificate once. But if we handed it out to them (or installed it) personally, they'd know that it really is our certificate. Is there a huge security risk that I am missing here? As long as nobody has access to our certificate authority server, nobody should be able to interfere with security, right? And the client certificates would be generated and handed out by us as well... Please advise me if I am making an error in judgement here, and thank you in advance.
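
    Mechanically, a private CA for this is a handful of OpenSSL commands; a minimal sketch (subject names are examples; a real deployment would keep ca.key offline and passphrase-protected, and mail clients generally expect the user's email address in the certificate plus the emailProtection extended key usage, which can be added via -extfile):

        # 1) root CA: distribute ca.crt to clients, guard ca.key
        openssl req -x509 -newkey rsa:2048 -days 3650 -keyout ca.key -out ca.crt -subj "/CN=Example Office CA"
        # 2) per-user key and signing request
        openssl req -newkey rsa:2048 -keyout user.key -out user.csr -subj "/CN=Jane Doe/emailAddress=jane@example.com"
        # 3) sign the request with the CA
        openssl x509 -req -in user.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 730 -out user.crt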

    Read the article

  • Multicast image restoration with adaptive speed

    - by Clinton Blackmore
    I'm curious to know if there are any tools for restoring disk images (or even transferring files) via multicast -- for any platform, especially if the project has source available -- where the multicast rate adjusts itself on the fly. On the Mac, all multicast solutions I am aware of (such as DeployStudio, and NetRestore before it) make use of multicast ASR (Apple Software Restore), which has one glaring deficiency: you have to set the multicast speed before you start sending a disk image over the network, and that speed is locked in. Either your clients can keep up and restore, or they can't*. It seems to me that it must be possible for the multicast server to adjust the data rate, so you basically say "start sending this image", clients connect, and, if they can't keep up, they tell the server so it slows down. (Likewise, I'd expect the server to try speeding up if no client is having difficulties keeping up, and I'd expect to be able to cap that maximum throughput so that other network activities can go on without being resource-starved.) So, what sort of tools are out there, for Linux or Windows? Is there something for the Mac I've overlooked? It just kills me that, by the time you get multicast up and going at a good speed to restore a lab, you could have unicast the data to all the computers and been done. * There is a little leeway involved. I think individual clients can say, "I missed a little bit of data" and get it, and they can opt to listen in the next time the image is sent over the network, but on the whole, if they missed it the first go round, you have to image the machine again, and there is no time savings.
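
    On the Linux side, udpcast (udp-sender/udp-receiver, GPL) is the usual answer here: as far as I know its sender paces itself from receiver acknowledgements, i.e. it throttles to the slowest client rather than a preset rate, which is the adaptive behaviour being asked for. A sketch from memory; verify the flags against the man page:

        # sender (image source): wait for at least 5 receivers, cap the bitrate
        udp-sender --file lab-image.img --min-receivers 5 --max-bitrate 500m
        # each client writes the stream straight to disk
        udp-receiver --file /dev/sda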

    Read the article

  • Postfix: a lot of 'relay access denied' errors in maillog

    - by tester3
    I'm on Centos 6.5 with Postfix/Dovecot and some virtual domains. Postfix works fine, but I've got a lot of messages like this "NOQUEUE: reject: RCPT from 1-160-127-12.dynamic.hinet.net[1.160.127.12]: 454 4.7.1 : Relay access denied; from= to= proto=SMTP" in my maillog. I've tried to close port 25 with iptables, when I do so - I got no such messages, but my mail system starts work incorrectly and can't receive mail from other hosts. Please help! My postconf -n: alias_database = $alias_maps alias_maps = hash:/etc/postfix/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 html_directory = no inet_interfaces = all inet_protocols = ipv4 mail_owner = postfix mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man message_size_limit = 20971520 mydestination = localhost.$mydomain, localhost newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES relay_domains = * sample_directory = /usr/share/doc/postfix-2.6.6/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtp_tls_cert_file = /etc/pki/tls/certs/example.com.crt smtp_tls_key_file = /etc/pki/tls/private/example.com.key smtp_tls_loglevel = 1 smtp_tls_session_cache_database = btree:/etc/postfix/smtp_tls_session_cache smtp_tls_session_cache_timeout = 3600s smtp_use_tls = yes smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination smtpd_sasl_auth_enable = yes smtpd_sasl_local_domain = example.com smtpd_sasl_path = /var/run/dovecot/auth-client smtpd_sasl_security_options = noanonymous smtpd_sasl_tls_security_options = $smtpd_sasl_security_options smtpd_sasl_type = dovecot smtpd_tls_cert_file = /etc/pki/tls/certs/example.com.crt smtpd_tls_key_file = /etc/pki/tls/private/example.com.key smtpd_tls_loglevel = 1 smtpd_tls_received_header = yes smtpd_tls_session_cache_database = btree:/etc/postfix/smtpd_tls_session_cache smtpd_tls_session_cache_timeout = 3600s smtpd_use_tls = yes soft_bounce = yes tls_random_source = dev:/dev/urandom unknown_local_recipient_reject_code = 550 virtual_alias_maps = hash:/etc/postfix/vmail_aliases virtual_gid_maps = static:2222 virtual_mailbox_base = /var/vmail virtual_mailbox_domains = hash:/etc/postfix/vmail_domains virtual_mailbox_maps = hash:/etc/postfix/vmail_mailbox virtual_minimum_uid = 2222 virtual_transport = virtual virtual_uid_maps = static:2222 Please help! Will attach master.cf or anything other if needed.

    Read the article

  • Procmail Postfix issue

    - by Blucreation
    Our CentOS server uses Postfix: Nov 1 11:31:52 webserver postfix/smtpd[30424]: 822A91872F: client=unknown[5.133.168.42], sasl_method=PLAIN, [email protected] Nov 1 11:31:52 webserver postfix/cleanup[30427]: 822A91872F: message-id=<[email protected]> Nov 1 11:31:52 webserver postfix/qmgr[1067]: 822A91872F: from=<[email protected]>, size=620, nrcpt=1 (queue active) Nov 1 11:31:52 webserver postfix/virtual[30505]: 822A91872F: to=<[email protected]>, relay=virtual, delay=0.12, delays=0.12/0/0/0, dsn=2.0.0, status=sent (delivered to maildir) Nov 1 11:31:52 webserver postfix/qmgr[1067]: 822A91872F: removed Nov 1 11:31:52 webserver postfix/smtpd[30424]: disconnect from unknown[5.133.168.42] I have this in my /etc/postfix/main.cf: mailbox_command = /usr/bin/procmail -a "$EXTENSION" My /etc/procmailrc contains: PATH="/usr/bin" SHELL="/bin/bash" LOGFILE="/var/log/procmail.log" VERBOSE="YES" LOG="#TEST#" I don't think procmail is picking up my procmailrc, as nothing ever gets logged for normal emails. If I type this: procmail DEFAULT=/dev/null VERBOSE=yes LOGFILE=/var/log/procmail.log /dev/null </dev/null I get entries in my log file, so I know procmail itself is working. Am I doing something wrong? Am I missing something? I eventually want my rule to call a PHP script only if the subject contains "SUPPORT TICKET" and the To address is "[email protected]", but that's for once I have this issue solved.
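
    The log already explains it: delivery happens via relay=virtual (the virtual_transport), and mailbox_command only applies to Postfix's local(8) transport, so procmail is never invoked for virtual-domain recipients. One common pattern is a pipe transport in master.cf with virtual mail routed through it; a hedged sketch (the pipe syntax is standard Postfix, but the user name is an assumption for this setup, and procmailrc then becomes responsible for delivering into the right maildir):

        # /etc/postfix/master.cf -- a procmail delivery transport for virtual recipients
        procmail  unix  -  n  n  -  -  pipe
          flags=R user=vmail argv=/usr/bin/procmail -t -m /etc/procmailrc ${sender} ${recipient}

        # /etc/postfix/main.cf
        virtual_transport = procmail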

    Read the article

  • Windows XP seemingly out of resources but plenty of free RAM and swap available

    - by Artem Russakovskii
    This one has been bothering me for years, and so far I haven't found an adequate solution. The problem occurs on pretty much every XP install I've done. After opening a variety of programs, or after the system has been running existing programs for a while, Windows seemingly runs out of resources without telling me. There's ALWAYS free RAM. For example, it just happened to me and I had over a gig of free RAM. There are no viruses, spyware, or other nonsense; it is a Windows resource problem, but the question is which resource it is running out of, how one pinpoints it, and how one prevents it. Sometimes this happens after running specific programs; for example, today it happened when I started Photoshop CS4 and Flash CS4 at the same time. I also noticed that restarting The Bat (an email client by RitLabs) seems to get rid of this problem for a while, but again, this happens on machines that don't even have The Bat installed. So what exactly happens? The symptoms are: pressing Alt-Tab doesn't bring up the list anymore; it just jumps to the next window instantly, very similar to the way Alt-Esc works, in this case because there aren't enough resources to bring up the Alt-Tab menu. Random programs crash, citing random errors, out-of-memory errors, system resources, inability to make system calls, etc. Random programs start missing random parts: for example, Firefox's top menus might disappear, pull up partial selections, or stop appearing altogether; IE might lose a few of its toolbars; some programs might fail to redraw or just plain go gray where the UI used to be. Windows itself never complains about running out of RAM, virtual memory, or anything at all, yet it's running out of something. The only clue I was able to find, and the fix I applied today, was the Desktop Heap Limitation. I haven't confirmed the fix works yet, as not enough time has passed. In the meantime, what are everyone's thoughts?
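
    These symptoms (UI elements failing to draw while plenty of RAM is free) are the classic signature of desktop-heap or GDI/USER handle exhaustion, which XP hits silently. The linked fix lives in a single registry value; a sketch of what is usually changed (the defaults shown are typical for XP; back up the key first):

        ; HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems\Windows
        ; inside that value's long string, the relevant field looks like:
        ;     SharedSection=1024,3072,512
        ; the second number is the interactive desktop heap in KB; raising it, e.g.
        ;     SharedSection=1024,8192,512
        ; is the usual remedy (reboot required)

    Watching the "GDI Objects" and "USER Objects" columns in Task Manager while the problem builds up would confirm whether a particular program is leaking handles; the default per-process GDI limit is 10,000.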

    Read the article

  • Adding a Second Wireless Router to an Existing Wired Network

    - by KVCrawford
    I apologize ahead of time, I know this has been asked before, but I'm still having problems...maybe you guys can help. I started out with the basic instructions from the highest-voted answer at http://serverfault.com/questions/41572/adding-a-second-wireless-router-to-my-network The new Wireless router in question is a Linksys Wireless-N Gigabit Router, Model # WRT310N Here are the steps I've taken in setting it up: Plug my laptop into LAN port #2 in the new router. Nothing else is connected at this point Configure the new router to be 192.168.1.200 (the original router is 192.168.1.1, and its DHCP clients are from 192.168.1.100-x.x.x.199) Set the internet connection on the new router to "DHCP Client" Turn off the DHCP server & NAT routing on the new router Plug in a LAN cable from the original router into the LAN port #1 on the new router (NOT the WAN port, nothing is plugged in there) Reset the new router Afterwards, I try to ping 192.168.1.1 from the laptop plugged into LAN port #2 on the new router, with no response. 192.168.1.200 garners no response either. Typing "ipconfig" tells me: Autoconfiguration IP Address: 169.254.198.113 Subnet Mask: 255.255.0.0 Default Gateway: 169.254.198.113 What's going wrong? I appreciate any help!
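
    An autoconfiguration address (169.254.x.x) means the laptop never received a DHCP lease at all: with DHCP disabled on the new router, leases have to come from the original router across the LAN-to-LAN cable, and if step 6 was a factory reset rather than a reboot, it also wiped the configuration done in steps 2-4. To test reachability independently of DHCP, a temporary static address works; a sketch (the adapter name below is the common default and may differ, and the netsh syntax is from memory):

        rem give the laptop a static address in the 192.168.1.x range
        netsh interface ip set address "Local Area Connection" static 192.168.1.50 255.255.255.0 192.168.1.1 1
        rem return to DHCP once 192.168.1.1 and 192.168.1.200 both answer pings
        netsh interface ip set address "Local Area Connection" dhcp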

    Read the article

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom Puppet function, which works fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running Puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts being the function that requires the cloudservers gem: package { "rubygems": ensure => installed, provider => "gem"; } package { "cloudservers": ensure => installed, provider => "gem", require => Package["rubygems"]; } class hosts::us { $hosts = get_hosts("us") hostentry { $hosts: } } define hostentry() { $parts = split($name, ',') $address = $parts[0] $ip = $parts[1] $aliases = $parts[2] host{ $address: ip => $ip, host_aliases => $aliases } } Is there a way to stop the function being run so early, or at least to make its run depend on the library being installed? Alternatively, is there a way that I can add dependencies somewhere in the functions folder that will be available to the function?
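
    Parser functions execute on the puppet master during catalog compilation, before any package resource is applied, so a require at the top of the function file fails on a clean box. Moving the require inside the function body defers loading until the function is actually called, and the gem then only needs to exist on the master (it cannot be installed by the same catalog that calls the function). A sketch of that shape; the newfunction API is standard, the body is illustrative:

        # lib/puppet/parser/functions/get_hosts.rb
        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            require 'cloudservers'   # lazy-load: only evaluated when the function runs
            region = args[0]
            # ... query the Rackspace API and return "address,ip,aliases" strings ...
          end
        end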

    Read the article
