Search Results

Search found 6167 results on 247 pages for 'sccm 2012'.

Page 210/247

  • Enabling mod_wsgi in Apache for a Django app on Gentoo

    - by hobbes3
    I installed Apache, Django, and mod_wsgi on Gentoo using emerge (on Amazon EC2). I know that the mod_wsgi is configured in /etc/apache2/modules.d/70_mod_wsgi.conf: <IfDefine WSGI> LoadModule wsgi_module modules/mod_wsgi.so </IfDefine> # vim: ts=4 filetype=apache So in my /etc/conf.d/apache I added the WSGI module: APACHE2_OPTS="-D DEFAULT_VHOST -D INFO -D SSL -D SSL_DEFAULT_VHOST -D LANGUAGE -D WSGI" But when I try to list the loaded module, mod_wsgi isn't listed. root ~ # apache2 -M | grep wsgi Syntax OK I also know that mod_wsgi isn't loading properly because the Apache configuration file doesn't recognize WSGIScriptAlias. By the way for Django to work I need to include a custom Apache configuration file. Where should I insert the line below? Include "/var/www/localhost/htdocs/mysite/apache/apache_django_wsgi.conf" I currently have that in the httpd.conf file but I feel like that file will get reseted whenever I upgrade Gentoo or related package. EDIT: it seems the mod_wsgi file is located in /usr/lib64/apache2/modules/mod_wsgi.so. Here is my detailed Apache settings: root@ip-99-99-99-99 /usr/portage/eclass # apache2 -V Server version: Apache/2.2.21 (Unix) Server built: Mar 7 2012 06:52:30 Server's Module Magic Number: 20051115:30 Server loaded: APR 1.4.5, APR-Util 1.3.12 Compiled using: APR 1.4.5, APR-Util 1.3.12 Architecture: 64-bit Server MPM: Prefork threaded: no forked: yes (variable process count) Server compiled with.... -D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="/usr" -D SUEXEC_BIN="/usr/sbin/suexec" -D DEFAULT_PIDLOG="/var/run/httpd.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="/var/run/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types" -D SERVER_CONFIG_FILE="/etc/apache2/httpd.conf"
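
    One thing worth noting: apache2 -M run by hand does not read the APACHE2_OPTS defines from /etc/conf.d/apache2, so the <IfDefine WSGI> block is skipped and the module will not show up even when the init script loads it fine. A hedged check, plus a sketch of keeping the Django include out of httpd.conf in a drop-in file (the file name, and the assumption that your httpd.conf already includes /etc/apache2/vhosts.d/*.conf, are mine, not from the question):

      # pass the define explicitly when listing modules by hand
      apache2 -D WSGI -M 2>/dev/null | grep wsgi

      # /etc/apache2/vhosts.d/10_django.conf -- a drop-in that package upgrades leave alone
      <IfDefine WSGI>
          Include "/var/www/localhost/htdocs/mysite/apache/apache_django_wsgi.conf"
      </IfDefine>

      /etc/init.d/apache2 restart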

    Read the article

  • How do I convert a video to GIF using ffmpeg, with reasonable quality?

    - by Kamil Hismatullin
    I'm converting .flv movie to .gif file with ffmpeg. ffmpeg -i input.flv -ss 00:00:00.000 -pix_fmt rgb24 -r 10 -s 320x240 -t 00:00:10.000 output.gif It works great, but output gif file has a very law quality. Any ideas how can I improve quality of converted gif? Output of command: $ ffmpeg -i input.flv -ss 00:00:00.000 -pix_fmt rgb24 -r 10 -s 320x240 -t 00:00:10.000 output.gif ffmpeg version 0.8.5-6:0.8.5-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers built on Jan 24 2013 14:52:53 with gcc 4.7.2 *** THIS PROGRAM IS DEPRECATED *** This program is only provided for compatibility and will be removed in a future release. Please use avconv instead. Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.flv': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: isommp42 creation_time : 2013-02-14 04:00:07 Duration: 00:00:18.85, start: 0.000000, bitrate: 3098 kb/s Stream #0.0(und): Video: h264 (High), yuv420p, 1280x720, 2905 kb/s, 25 fps, 25 tbr, 50 tbn, 50 tbc Metadata: creation_time : 1970-01-01 00:00:00 Stream #0.1(und): Audio: aac, 44100 Hz, stereo, s16, 192 kb/s Metadata: creation_time : 2013-02-14 04:00:07 [buffer @ 0x92a8ea0] w:1280 h:720 pixfmt:yuv420p [scale @ 0x9215100] w:1280 h:720 fmt:yuv420p -> w:320 h:240 fmt:rgb24 flags:0x4 Output #0, gif, to 'output.gif': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: isommp42 creation_time : 2013-02-14 04:00:07 encoder : Lavf53.21.1 Stream #0.0(und): Video: rawvideo, rgb24, 320x240, q=2-31, 200 kb/s, 90k tbn, 10 tbc Metadata: creation_time : 1970-01-01 00:00:00 Stream mapping: Stream #0.0 -> #0.0 Press ctrl-c to stop encoding frame= 101 fps= 32 q=0.0 Lsize= 8686kB time=10.10 bitrate=7045.0kbits/s dup=0 drop=149 video:22725kB audio:0kB global headers:0kB muxing overhead -61.778676% Thanks.
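
    Most of the quality loss here comes from GIF's 256-colour limit plus the generic palette this (deprecated) 0.8/Libav build uses. Newer ffmpeg releases (2.6 or later) can build a palette from the clip itself; a hedged two-pass sketch, assuming such a build is available:

      # pass 1: compute an optimised 256-colour palette for the first 10 seconds
      ffmpeg -ss 0 -t 10 -i input.flv -vf "fps=10,scale=320:-1:flags=lanczos,palettegen" palette.png
      # pass 2: encode the GIF with that palette
      ffmpeg -ss 0 -t 10 -i input.flv -i palette.png \
          -filter_complex "fps=10,scale=320:-1:flags=lanczos[x];[x][1:v]paletteuse" output.gif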

    Read the article

  • Which MAC address is the right one?

    - by Paul Dinh
    Result by 'getmac': C:\>getmac Physical Address Transport Name =================== ========================================================== 72-03-C6-48-59-34 \Device\Tcpip_{8AEB3263-18C4-449E-A80F-BC2541DDC2A9} 00-21-9B-D5-6F-EE \Device\Tcpip_{C2F9CE19-D68F-4105-9766-45CBE6D82331} 00-22-68-D2-9B-F7 \Device\Tcpip_{A2701130-9221-43FE-8F14-7B1114F84DC3} Result by 'ipconfig /all': C:\>ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : xps-m1530 Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Mixed IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No Ethernet adapter Wireless Network Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Dell Wireless 1395 WLAN Mini-Card Physical Address. . . . . . . . . : 00-22-68-D2-9B-F7 Dhcp Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Autoconfiguration IP Address. . . : 169.254.246.4 Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Marvell Yukon 88E8040 PCI-E Fast Eth ernet Controller Physical Address. . . . . . . . . : 00-21-9B-D5-6F-EE Dhcp Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IP Address. . . . . . . . . . . . : 192.168.1.112 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.1.1 DHCP Server . . . . . . . . . . . : 192.168.1.1 DNS Servers . . . . . . . . . . . : 8.8.8.8 8.8.4.4 Lease Obtained. . . . . . . . . . : 01 November 2012 9:00:36 AM Lease Expires . . . . . . . . . . : 04 November 2012 9:00:36 AM There is a MAC address on the back of my laptop, but the sticker is no longer there. So I use the 'getmac' command to get the MAC addresses. But which address shown by 'getmac' above is the one matching the MAC in the sticker on the back of my laptop? Or am I mistaken something? 00-21-... is the ethernet adapter, 00-22-... is the wireless adapter, and 72-03-... is what?

    Read the article

  • Unable to PPTP through NAT on Cisco 881

    - by MasterRoot24
    I'm trying to connect to a PPTP server which is sat behind a Cisco 881 NAT router. The server is running Ubuntu Server 12.04 and is running Poptop pptpd as the PPTP daemon listening for connections. As discussed in my other question, I'm trying to setup a Cisco 881 router to replace my old Linksys WAG320N. This same server and WAN connection worked fine with the WAG320N with no special configuration, other than allowing 1723 in through the firewall. On the Cisco 881, I'm using the newer ip nat enable or NAT NVI to setup static routes in through the firewall for the services running behind the router. My reason being that I can't run another copy of my live DNS domains internally with local IP addresses in. For the purposes of this question, though, I have rebuilt the router with ip nat inside/outside style NAT'ing, but this issue is still apparent. HTTP/SMTP/IMAP etc. all work ok from both the WAN and LAN interfaces of the router. I'm only having issues with SIP (see other question) and PPTP. My issue is that the GRE doesn't appear to be passing through NAT correctly and one end of the connection is not receiving GRE traffic when it should be, so the server hangs up the connection. Here's an example of /var/log/syslog with debug enabled in /etc/pptpd.conf: Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: MGR: Launching /usr/sbin/pptpctrl to handle client Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: local address = 192.168.1.50 Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: remote address = 192.168.1.51 Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: pppd options file = /etc/ppp/pptpd-options Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Client 82.132.248.216 control connection started Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 1) Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Made a START CTRL CONN RPLY packet Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: I wrote 156 bytes to the client. Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Sent packet to client Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 7) Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Set parameters to 100000000 maxbps, 64 window size Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Made a OUT CALL RPLY packet Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Starting call (launching pppd, opening GRE) Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: pty_fd = 6 Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: tty_fd = 7 Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: I wrote 32 bytes to the client. Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Sent packet to client Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): program binary = /usr/sbin/pppd Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): local address = 192.168.1.50 Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): remote address = 192.168.1.51 Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded. Dec 11 21:06:30 <HOSTNAME> pppd[22627]: pppd 2.4.5 started by root, uid 0 Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Using interface ppp0 Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Connect: ppp0 <--> /dev/pts/3 Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: GRE: Bad checksum from pppd. 
Dec 11 21:06:31 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 15) Dec 11 21:06:31 <HOSTNAME> pptpd[22626]: CTRL: Got a SET LINK INFO packet with standard ACCMs Dec 11 21:07:00 <HOSTNAME> pppd[22627]: LCP: timeout sending Config-Requests Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Connection terminated. Dec 11 21:07:00 <HOSTNAME> avahi-daemon[1042]: Withdrawing workstation service for ppp0. Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Modem hangup Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Exit. Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: GRE: read(fd=6,buffer=6075a0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7) Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Reaping child PPP[22627] Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Client 82.132.248.216 control connection finished Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Exiting now Dec 11 21:07:00 <HOSTNAME> pptpd[5803]: MGR: Reaped child 22626 As far as Cisco are concerned, all I need is ip nat source static tcp <SERVER LAN IP> 1723 interface FastEthernet4 1723 but of course this doesn't seem to the be helping the GRE traffic through as it should. Trying the connection to the LAN IP of the server from the same LAN as the server (behind the router), the PPTP connection works fine, so I'm confident that the server's config is ok. Furthermore, all I needed on my WAG320N was to open 1723 in the firewall. Here's my current router config: ! ! Last configuration change at 20:20:15 UTC Tue Dec 11 2012 by xxx version 15.2 no service pad service timestamps debug datetime msec service timestamps log datetime msec service password-encryption ! hostname xxx ! boot-start-marker boot-end-marker ! ! enable secret 4 xxxx ! aaa new-model ! ! aaa authentication login local_auth local ! ! ! ! ! aaa session-id common ! memory-size iomem 10 ! crypto pki trustpoint TP-self-signed-xxx enrollment selfsigned subject-name cn=IOS-Self-Signed-Certificate-xxx revocation-check none rsakeypair TP-self-signed-xxx ! ! crypto pki certificate chain TP-self-signed-xxx certificate self-signed 01 xxx quit ip gratuitous-arps ip auth-proxy max-login-attempts 5 ip admission max-login-attempts 5 ! ! ! ! ! ip domain list dmz.xxx.local ip domain list xxx.local ip domain name dmz.xxx.local ip name-server 192.168.1.x ip cef login block-for 3 attempts 3 within 3 no ipv6 cef ! ! multilink bundle-name authenticated license udi pid CISCO881-SEC-K9 sn xxx ! ! username admin privilege 15 secret 4 xxx username joe secret 4 xxx ! ! ! ! ! ip ssh time-out 60 ! ! ! ! ! ! ! ! ! interface FastEthernet0 no ip address ! interface FastEthernet1 no ip address ! interface FastEthernet2 no ip address ! interface FastEthernet3 switchport access vlan 2 no ip address ! interface FastEthernet4 ip address dhcp ip nat enable duplex auto speed auto ! interface Vlan1 ip address 192.168.1.x 255.255.255.0 no ip redirects no ip unreachables no ip proxy-arp ip nat enable ! interface Vlan2 ip address 192.168.0.x 255.255.255.0 ! ip forward-protocol nd ip http server ip http access-class 1 ip http authentication local ip http secure-server ! ! ip nat source list 1 interface FastEthernet4 overload ip nat source list 2 interface FastEthernet4 overload ip nat source static tcp 192.168.1.x 1723 interface FastEthernet4 1723 ! ! access-list 1 permit 192.168.0.0 0.0.0.255 access-list 2 permit 192.168.1.0 0.0.0.255 ! ! ! ! 
control-plane ! ! banner motd Authorized Access only ! line con 0 exec-timeout 15 0 login authentication local_auth line aux 0 exec-timeout 15 0 login authentication local_auth line vty 0 4 access-class 2 in login authentication local_auth length 0 transport input all ! ! end UPDATE 16/12/2012: The only progress that I have been able to make on this issue is that I'm confident that the issue is caused by the GRE tunnels (which are required for the PPTP connection to complete) are being blocked. When attempting a connection, I can see in show ip nat nvi translations that both a TCP translation on 1723 is setup and also a GRE translation is setup also. I appear to be able to see GRE related packets on the LAN that the server is on, so I am lead to believe that the server is sending(?) GRE packets, however running Wireshark on a client PC when attempting a connection shows absolutely no GRE packets. Whilst there are no configuration directives in my config posted above (that I can pin point) which would specifically block them, it would appear that the GRE packets are not being allowed in/out of the router's firewall, even though a NAT translation entry is setup to the server's LAN address. Would anyone be able to provide me with some help to ensure that GRE packets are not blocked by the router's firewall, so that this can be ruled out as a possible issue please?
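
    Since GRE has no ports, the static 1723/tcp entry only covers the PPTP control channel; the tunnel itself (IP protocol 47) has to be translated separately, and the quickest way to narrow this down is to confirm where the protocol-47 packets stop. A small diagnostic sketch (the interface name is an assumption):

      # on the Ubuntu PPTP server: is GRE arriving/leaving at all during a connection attempt?
      tcpdump -ni eth0 'ip proto 47 or tcp port 1723'

      # on the 881, while a client connects (enable mode):
      #   show ip nat nvi translations | include 1723
      #   debug ip nat detailed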

    Read the article

  • Anti Virus Service does not run - Windows XP SP3 32bit Home

    - by Stefan Fassel
    I have a somewhat strange problem here. I am trying to run anti-virus software on my Windows XP Home 32-bit system. After a serious crash I had to fall back to an outdated copy of my initial installation and had Windows install 5 years of updates. So far so good. After installing new anti-virus software (Bitdefender 2012) everything seemed to be fine; the initial scan went fine and configuration was working. But after restarting the system the virus scanner was unable to start up again. Even the configuration console of the AV software did not start. I tried scanning the system for malware, but nothing was found. Then I tried a different AV product (MS Security Essentials), but in the end it failed to start too. I have tried to start the service manually, but I seem to be missing the privilege to do so. I am logged in as a non-"Administrator" user with admin privileges (not many choices there on an XP Home system). I cannot switch to the Administrator account outside of protected mode. When running Windows in protected mode I am unable to start the AV software because it does not run in protected mode. I am a bit at a loss now...

    Read the article

  • SVN Authentication with LDAP and Active Directory

    - by Alex Holsgrove
    I am having a few problems getting SVN authentication to work with LDAP / Active Directory. My SVN installation works fine, but after enabling LDAP in my apache vhost, I just can't get my users to authenticate. I can use a selection of LDAP browsers to successfully connect to Active Directory, but just can't seem to get this to work. SVN is setup in /var/local/svn Server is svn.domain.local For testing, my repository is /var/local/svn/test My vhost file is as follows: <VirtualHost *:80> ServerAdmin [email protected] ServerAlias svn.domain.local ServerName svn.domain.local DocumentRoot /var/www/svn/ <Location /test> DAV svn #SVNListParentPath On SVNPath /var/local/svn/test AuthzSVNAccessFile /var/local/svn/svnaccess AuthzLDAPAuthoritative off AuthType Basic AuthName "SVN Server" AuthBasicProvider ldap AuthLDAPBindDN "CN=adminuser,OU=SBSAdmin Users,OU=Users,OU=MyBusiness,DC=domain,DC=local" AuthLDAPBindPassword "admin password" AuthLDAPURL "ldap://192.168.1.6:389/OU=SBSUsers,OU=Users,OU=MyBusiness,DC=domain,DC=local?sAMAccountName?sub?(objectClass=*)" Require valid-user </Location> CustomLog /var/log/apache2/svn/access.log combined ErrorLog /var/log/apache2/svn/error.log </VirtualHost> In my error.log, I don't seem to get any bind errors (should I be looking elsewhere?), but just the following: [Thu Jun 21 09:51:38 2012] [error] [client 192.168.1.142] user alex: authentication failure for "/test/": Password Mismatch, referer: http://svn.domain.local/test/ At the end of "AuthLDAPURL", I have seen people using TLS and NONE but neither seem to help in my case. I have the ldap modules loaded and have checked as much as I know, so any help would be most welcome. Thanks
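
    "Password Mismatch" from mod_authnz_ldap generally means the user entry was located but the second bind with the user's own password failed, so it can help to replay both steps outside Apache. A sketch with the OpenLDAP client tools (the username alex is taken from the log; everything else mirrors the vhost, so treat it as an assumption):

      # 1. can the bind DN find the user under the configured search base?
      ldapsearch -x -H ldap://192.168.1.6:389 \
          -D "CN=adminuser,OU=SBSAdmin Users,OU=Users,OU=MyBusiness,DC=domain,DC=local" -W \
          -b "OU=SBSUsers,OU=Users,OU=MyBusiness,DC=domain,DC=local" "(sAMAccountName=alex)" dn
      # 2. can that user bind with the DN returned above and his own password?
      ldapsearch -x -H ldap://192.168.1.6:389 -D "<DN from step 1>" -W -s base -b "" "(objectClass=*)"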

    Read the article

  • Backup script to FTP with timed subfolders

    - by Frederik Nielsen
    I want to make a backup script, that makes a .tar.gz of a folder I define, say fx /root/tekkit/world This .tar.gz file should then be uploaded to a FTP server, named by the time it was uploaded, for example: 07-10-2012-13-00.tar.gz How should such backup script be written? I already figured out the .tar.gz part - just need the naming and the uploading to FTP. I know that FTP is not the most secure way to do it, but as it is non-sensitive data, and FTP is the only option I have, it will do. Edit: I ended up with this script: #!/bin/bash # have some path predefined for backup unless one is provided as first argument BACKUP_DIR="/root/tekkit/world/" TMP_DIR="/tmp/tekkitbackup/" FINISH_DIR="/tmp/tekkitfinished/" # construct name for our archive TIME=$(date +%d-%m-%Y-%H-%M) if [ $1 ]; then BACKUP_DIR="$1" fi echo "Backing up dir ... $BACKUP_DIR" mkdir $TMP_DIR cp -R $BACKUP_DIR $TMP_DIR cd $FINISH_DIR tar czvfp tekkit-$TIME.tar.gz -C $TMP_DIR . # create upload script for lftp cat <<EOF> lftp.upload.script open server user user password lcd $FINISH_DIR mput tekkit-$TIME.tar.gz exit EOF # start backup using lftp and script we created; if all went well print simple message and clean up lftp -f lftp.upload.script && ( echo Upload successfull ; rm lftp.upload.script )
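
    Two small gaps in the script as posted: $FINISH_DIR is never created before the cd, and the staging copies are left behind after the upload. A hedged touch-up of just those lines (same variable names as above):

      mkdir -p "$TMP_DIR" "$FINISH_DIR"      # create both working directories up front
      cp -R "$BACKUP_DIR" "$TMP_DIR"
      cd "$FINISH_DIR" || exit 1             # bail out rather than tar up the wrong directory
      tar czvfp tekkit-$TIME.tar.gz -C "$TMP_DIR" .
      # ... lftp script generation unchanged ...
      lftp -f lftp.upload.script && { echo "Upload successful"; rm -f lftp.upload.script; rm -rf "$TMP_DIR"; }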

    Read the article

  • Terminal Server 2008 Login: Access Denied

    - by user1236435
    When I try to RDP into a Server 2008 Terminal Server, I get a message that says "Access Denied" and an OK button. I setup the licensing mode correctly (per user) and also have setup to allow all remote connections. I get the following in the security event log: Log Name: Security Source: Microsoft-Windows-Security-Auditing Date: 28/06/2012 12:01:16 Event ID: 4656 Task Category: File System Level: Information Keywords: Audit Failure User: N/A Computer: 0BraApps1.brenntagLA.hou Description: A handle to an object was requested. Subject: Security ID: BRENNTAGLA\jaadmin Account Name: jaadmin Account Domain: BRENNTAGLA Logon ID: 0xbbe3f Object: Object Server: Security Object Type: File Object Name: C:\Windows\System32\ServerManager.msc Handle ID: 0x0 Process Information: Process ID: 0x60c Process Name: C:\Windows\System32\mmc.exe Access Request Information: Transaction ID: {00000000-0000-0000-0000-000000000000} Accesses: READ_CONTROL SYNCHRONIZE WriteData (or AddFile) AppendData (or AddSubdirectory or CreatePipeInstance) WriteEA ReadAttributes WriteAttributes Access Reasons: READ_CONTROL: Granted by D:(A;;0x1200a9;;;BA) SYNCHRONIZE: Granted by D:(A;;0x1200a9;;;BA) WriteData (or AddFile): Not granted AppendData (or AddSubdirectory or CreatePipeInstance): Not granted WriteEA: Not granted ReadAttributes: Granted by ACE on parent folder D:(A;;0x1301bf;;;BA) WriteAttributes: Not granted Access Mask: 0x120196 Privileges Used for Access Check: - Restricted SID Count: 0 Event Xml: 4656 1 0 12800 0 0x8010000000000000 1535565 Security 0BraApps1.brenntagLA.hou S-1-5-21-205301047-3902605089-2438454170-21511219 jaadmin BRENNTAGLA 0xbbe3f Security File C:\Windows\System32\ServerManager.msc 0x0 {00000000-0000-0000-0000-000000000000} %%1538 %%1541 %%4417 %%4418 %%4420 %%4423 %%4424 %%1538: %%1801 D:(A;;0x1200a9;;;BA) %%1541: %%1801 D:(A;;0x1200a9;;;BA) %%4417: %%1805 %%4418: %%1805 %%4420: %%1805 %%4423: %%1811 D:(A;;0x1301bf;;;BA) %%4424: %%1805 0x120196 - 0 0x60c C:\Windows\System32\mmc.exe Any ideas?

    Read the article

  • dns in a small network with router and AD domain

    - by Felix
    I have a small office network with a router (running OpenWRT), a Windows Domain Controller (it used to be 2008 R2; I just backed it up and upgraded to 2012), about a dozen AD clients (3 servers and Windows workstations) and several non-AD clients (network printer, PBX). The problem is that the clients can't access servers by name (only by IP). I tried all kinds of permutations. Right now the domain controller runs the DNS server for all desktops, but unless I put an entry in the hosts file I can only get through by IP. I have the router as the DHCP server (since not all devices are on AD), and except for the Domain Controller all IP addresses, including "static" ones, are assigned by the router. Most frustrating, some servers sometimes just work! For example, I can often get to the Linux box by name (it is part of the domain using BeyondTrust Integration Services), but I can never get to the SQL Server box. It seems like non-domain devices see more names than domain members... This network should be fairly typical, but I couldn't get any guidance about how to set up the DNS/DHCP service to make all nodes happy. The closest is this question, but it's still different! Thanks
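
    When the router is the DHCP server but the DC owns the AD zone, a lot hinges on every client receiving the DC as its DNS server (and the AD domain as its search suffix) from DHCP; otherwise lookups go to the router's dnsmasq, which knows nothing about the AD names. A sketch for the OpenWRT side (the DC address 192.168.1.10 and the domain corp.local are assumptions):

      # on the router, hand out the DC as DNS (option 6) and the AD domain as suffix (option 15)
      uci add_list dhcp.lan.dhcp_option='6,192.168.1.10'
      uci add_list dhcp.lan.dhcp_option='15,corp.local'
      uci commit dhcp && /etc/init.d/dnsmasq restart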

    Read the article

  • How to get Synergy working on Ubuntu 11.10 and Windows 7?

    - by Linda
    I'm using Ubuntu 11.10 32-bit and Windows 7 64-bit, however, Synergy only works when a window (application or folder) is open and touching the edge of the screen where the mouse should "jump". In other words, if a window is open and maximized, Synergy works normally. Without any windows, the mouse does not jump to the other screen. My steps: (Ubuntu) apt-get install -y quicksynergy (Windows) Install Synergy (I've tried both 1.3.8 and 1.4.8 and both 32 and 64-bit) On Ubuntu 11.10 32-bit (Synergy Server config): ~/.quicksynergy/synergy.conf section: screens myubuntu: mywin7: end section: links myubuntu: right = mywin7 mywin7: left = myubuntu end On Ubuntu 11.10 32-bit: $ /usr/bin/synergys -f --config .quicksynergy/synergy.conf ... 2012-04-25T14:04:12 NOTE: client "mywin7" has connected /build/buildd/synergy-1.3.6/lib/server/CServer.cpp,287 (output hangs here) On Windows 7 64-bit: Synergy 1.3.8 Client on Microsoft Windows 7 x86 (WOW64) started client connecting to 'myubuntu': ###.###.###.###:24800 connected to server (output hangs here) At this point, things should work, but my mouse still can't change screens unless a window is maximized on my Ubuntu machine. Everything is running on port 24800. No firewall on Ubuntu. Firewall port 24800 open on Windows 7. This was previously working on Ubuntu 10.10 and Windows 7 (so only Ubuntu has been upgraded). I'm open to using either 32 or 64-bit on either server or client side, but I just want to get it working on Ubuntu 11.10 and Windows 7! I'm also using Ubuntu Classic (no effects), and not Unity.

    Read the article

  • Determining the Source of a Given File System Mount on Unix [migrated]

    - by phobos51594
    Background Recently I have run into a bit of a snag on my home FreeBSD server. I recently upgraded it to the latest stable release, and I have noticed some strange behavior with the /var partition. Originally, I had the system configured such that /var had its own partition with /var/run and /var/log in memory disks (/tmp, too). After the upgrade, I notice there is a new, fourth memory disk mounting directly to /var that I had not set up manually and is not in my fstab. It is only 28 megs or so in size and is causing problems when trying to update my ports collection. The ramdisk mounts atuomagically at boot and cannot be unmounted while in multi-user mode. If I drop to single user mode, I am able to unmount it without issue, however rebooting causes it to pop right back up. System specifications have been included at the end of the post. Question Is there any way to determine exactly what is mounting a given memory disk (or any filesystem, for that matter) after it has been mounted? Alternately, does anybody have any ideas what might have caused the new /var ramdisk to pop up? System Specification # uname -a FreeBSD sarge 9.1-PRERELEASE FreeBSD 9.1-PRERELEASE #0: Thu Nov 22 14:02:13 PST 2012 donut@sarge:/usr/obj/usr/src/sys/GENERIC i386 # df Filesystem 1K-blocks Used Avail Capacity Mounted on /dev/da0s1a 515612 410728 63636 87% / devfs 1 1 0 100% /dev /dev/da0s1d 515612 287616 186748 61% /var /dev/da0s1e 6667808 2292824 3841560 37% /usr /dev/md0 63004 32 57932 0% /tmp /dev/md1 3484 8 3200 0% /var/run /dev/md2 31260 8 28752 0% /var/log /dev/md3 31260 512 28248 2% /var <-- This # cat /etc/fstab # Device Mountpoint FStype Options Dump Pass# /dev/da0s1a / ufs rw,noatime 1 1 /dev/da0s1d /var ufs rw,noatime 2 2 /dev/da0s1e /usr ufs rw,noatime 2 2 md /tmp mfs rw,-s64M,noatime 0 0 md /var/run mfs rw,-s4M,noatime 0 0 md /var/log mfs rw,-s32M,noatime 0 0 Thank you in advance for any assistance.
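
    FreeBSD's own rc scripts can create exactly this kind of memory disk: /etc/defaults/rc.conf ships varmfs="AUTO" with a varsize default of roughly 28m, and AUTO mounts an md over /var whenever the boot-time check decides /var is not usable. Worth checking whether that is where md3 comes from; a hedged sketch:

      # where do varmfs/varsize get set?
      grep -E 'varmfs|varsize|populate_var' /etc/defaults/rc.conf /etc/rc.conf
      # if that is the culprit, tell rc never to shadow the real /var partition
      echo 'varmfs="NO"' >> /etc/rc.conf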

    Read the article

  • illegitimate traffic from user agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10 (.NET CLR 3.5.30729)

    - by user114293
    Since the beginning of the year, I'm getting a lot of traffic with the user agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10 (.NET CLR 3.5.30729). My access logs show 40% - 60% from that user agent. That's strange because the user agent states a Firefox 3.0.10 browser (is anybody using that browser in 2012? Definitely not 40%-60% of visitors on a normal website). Also, the logs show that this user agent only requested the HTML document and no referenced assets like images, css, js files. I checked the IPs of those requests (with that UA). It's coming from all over the world. I recognized that those IPs sometimes have a mobile user agent. So my suspicion is a mobile app that is doing a lot of "spider requests" - but if that would be the case than other web sites should have the same problem. That's actually my question: Does anybody experience same/similar problems?
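
    Before treating it as a bot, it can help to see how many distinct IPs send that user agent and whether they ever fetch anything but HTML. A quick sketch against a combined-format access log (the file name is an assumption):

      UA='Gecko/2009042316 Firefox/3.0.10'
      # top source IPs for this user agent
      grep -F "$UA" access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head
      # how many of those requests are for assets rather than pages?
      grep -F "$UA" access.log | awk '{print $7}' | grep -Ec '\.(css|js|png|jpe?g|gif)'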

    Read the article

  • Why can't I download the JDK from the Oracle web site directly without AuthParam?

    - by hugemeow
    That is, downloading with the following command: why does it fail to download the file? wget http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin The following command works, but that AuthParam may stop working after a while. Why? wget http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin?AuthParam=1346955572_27e44512fe8ef5cb920c4c329e5f0fd8 How is this AuthParam option implemented? Why can't I download without this parameter, and why can I only get this parameter using a browser (Explorer)? Is a rewrite used on the Oracle server when dealing with the wget request? Why does the same command stop working after an hour; has the value of AuthParam expired? If so, how does the server check whether the value of AuthParam has expired? wget http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin?AuthParam=1346955572_27e44512fe8ef5cb920c4c329e5f0fd8 --2012-09-07 03:51:01-- http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin?AuthParam=1346955572_27e44512fe8ef5cb920c4c329e5f0fd8 Resolving download.oracle.com... 23.67.251.50, 23.67.251.57 Connecting to download.oracle.com|23.67.251.50|:80... connected. HTTP request sent, awaiting response... 403 Forbidden 2012-09-07 03:51:01 ERROR 403: Forbidden. @KJ-SRS: is that some kind of CGI program which is used to judge whether the AuthParam is right? Is it possible to download the JDK package purely with the wget command, without getting that AuthParam from a browser first?
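
    The AuthParam token is added by Oracle's download host once the licence has been accepted in a browser, and it is only valid for a short window, which is why a copied URL stops working later. For scripted downloads, later Oracle downloads accept a licence-acceptance cookie instead; a hedged sketch (this behaviour has changed over the years and may not apply to this particular 6u35 file):

      wget --no-cookies \
          --header "Cookie: oraclelicense=accept-securebackup-cookie" \
          http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin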

    Read the article

  • ubuntu dmidecode is not functioning properly

    - by Alaa Alomari
    dmidecode is giving irrelevant and conflicted results. it shows that i have two slots while the correct is 8 (the board is Tyan S5350.) uname -a Linux synd01 3.0.0-16-server #29-Ubuntu SMP Tue Feb 14 13:08:12 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux root@synd01:/home/badmin# dmidecode -t 16 dmidecode 2.9 SMBIOS 2.33 present. Handle 0x0011, DMI type 16, 15 bytes Physical Memory Array Location: System Board Or Motherboard Use: System Memory Error Correction Type: None Maximum Capacity: 4 GB Error Information Handle: Not Provided Number Of Devices: 2 while root@synd01:/home/badmin# dmidecode -t 17 | grep Size Size: No Module Installed Size: No Module Installed Size: 1024 MB Size: 1024 MB Size: No Module Installed Size: No Module Installed Size: 1024 MB Size: 1024 MB also lshw shows: *-memory description: System Memory physical id: 11 slot: System board or motherboard size: 4GiB *-bank:0 description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty] physical id: 0 slot: J3B1 clock: 166MHz (6.0ns) *-bank:1 description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty] physical id: 1 slot: J3B3 clock: 166MHz (6.0ns) *-bank:2 description: DIMM DDR Synchronous 166 MHz (6.0 ns) physical id: 2 slot: J2B2 size: 1GiB width: 64 bits clock: 166MHz (6.0ns) *-bank:3 description: DIMM DDR Synchronous 166 MHz (6.0 ns) physical id: 3 slot: J2B4 size: 1GiB width: 64 bits clock: 166MHz (6.0ns) *-bank:4 description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty] physical id: 4 slot: J3B2 clock: 166MHz (6.0ns) *-bank:5 description: DIMM DDR Synchronous 166 MHz (6.0 ns) [empty] physical id: 5 slot: J2B1 clock: 166MHz (6.0ns) *-bank:6 description: DIMM DDR Synchronous 166 MHz (6.0 ns) physical id: 6 slot: J2B3 size: 1GiB width: 64 bits clock: 166MHz (6.0ns) *-bank:7 description: DIMM DDR Synchronous 166 MHz (6.0 ns) physical id: 7 slot: J1B1 size: 1GiB width: 64 bits clock: 166MHz (6.0ns) what might cause this conflict and how can i fix it? Thanks

    Read the article

  • Removing/modifying LDAP objectclasses/attributes using olc

    - by Foezjie
    I'm having trouble using openldap's olc to modify a schema without shutting down the server. To test some things out, I made the following schema: objectIdentifier tests orgUlyssisOID:4 objectIdentifier testAttribute tests:1 objectIdentifier testObjectClass tests:2 attributeType ( testAttribute:1 NAME 'attr1' DESC 'attribuut 1' SYNTAX '1.3.6.1.4.1.1466.115.121.1.40' ) attributeType ( testAttribute:2 NAME 'attr2' DESC 'attribuut 2' SUP userPassword SINGLE-VALUE ) objectclass ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST (attr1 $ attr2 ) ) And added it to a new schema called test. (cn={9}test.ldif in cn=schema). Now I can't seem to figure out how to delete class1 from that schema. I use the following LDIF (and tried lots of variations too, to no avail) dn : cn={9}test,cn=schema,cn=config changetype: modify delete: olcObjectClasses olcObjectClasses: ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST ( attr1 $ attr2 ) ) Running ldapmodify -x -W -D cn=admin,cn=config -f test.ldif -d 0 gives no output. -d 1 gives this: ldap_create ldap_sasl_bind ldap_send_initial_request ldap_new_connection 1 1 0 ldap_int_open_connection ldap_connect_to_host: TCP localhost:389 ldap_new_socket: 4 ldap_prepare_socket: 4 ldap_connect_to_host: Trying 127.0.0.1:389 ldap_pvt_connect: fd: 4 tm: -1 async: 0 ldap_open_defconn: successful ldap_send_server_request ber_scanf fmt ({it) ber: ber_scanf fmt ({i) ber: ber_flush2: 38 bytes to sd 4 ldap_result ld 0x7f2a8ccf3430 msgid 1 wait4msg ld 0x7f2a8ccf3430 msgid 1 (infinite timeout) wait4msg continue ld 0x7f2a8ccf3430 msgid 1 all 1 ** ld 0x7f2a8ccf3430 Connections: * host: localhost port: 389 (default) refcnt: 2 status: Connected last used: Mon Sep 10 11:29:57 2012 ** ld 0x7f2a8ccf3430 Outstanding Requests: * msgid 1, origid 1, status InProgress outstanding referrals 0, parent count 0 ld 0x7f2a8ccf3430 request count 1 (abandoned 0) ** ld 0x7f2a8ccf3430 Response Queue: Empty ld 0x7f2a8ccf3430 response count 0 ldap_chkResponseList ld 0x7f2a8ccf3430 msgid 1 all 1 ldap_chkResponseList returns ld 0x7f2a8ccf3430 NULL ldap_int_select read1msg: ld 0x7f2a8ccf3430 msgid 1 all 1 ber_get_next ber_get_next: tag 0x30 len 12 contents: read1msg: ld 0x7f2a8ccf3430 msgid 1 message type bind ber_scanf fmt ({eAA) ber: read1msg: ld 0x7f2a8ccf3430 0 new referrals read1msg: mark request completed, ld 0x7f2a8ccf3430 msgid 1 request done: ld 0x7f2a8ccf3430 msgid 1 res_errno: 0, res_error: <>, res_matched: <> ldap_free_request (origid 1, msgid 1) ldap_parse_result ber_scanf fmt ({iAA) ber: ber_scanf fmt (}) ber: ldap_msgfree ldap_free_connection 1 1 ldap_send_unbind ber_flush2: 7 bytes to sd 4 ldap_free_connection: actually freed So no real indication of an error. Where am I doing it wrong? Bonus question: If I have some entries of a certain objectclass, can I modify it (add/remove attributeTypes) without removing the entries? Thanks in advance for all help.
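
    Two things worth checking here. First, the LDIF above reads "dn :" with a space before the colon, which ldapmodify may silently mis-parse, and that alone could explain the lack of any output. Second, a delete-by-value only succeeds when the value matches byte for byte what slapd stored, and cn=config stores the expanded numeric OIDs rather than the testObjectClass:1 macro. A sketch that reads the stored value back first (bind details copied from the question):

      # fetch the olcObjectClasses value exactly as cn=config stores it
      ldapsearch -x -W -D cn=admin,cn=config -b 'cn={9}test,cn=schema,cn=config' -LLL olcObjectClasses

      # modify.ldif -- note "dn:" with no space, and the value pasted verbatim from the search
      dn: cn={9}test,cn=schema,cn=config
      changetype: modify
      delete: olcObjectClasses
      olcObjectClasses: {0}( ... NAME 'class1' ... )

      ldapmodify -x -W -D cn=admin,cn=config -f modify.ldif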

    Read the article

  • Apache httpd processes and PHP out of memory

    - by Ofri
    I have a VPS running apache-php-mysql on centos and a single drupal website installed. The VPS has 256MB of RAM (could be the root cause of all my problems... maybe I just need more). Whenever I try to open my website from multiple browser tabs (about 8... not 800) all at once, apache crashes! I have this on the log: [Wed Oct 24 11:26:31 2012] [error] [client xxx] PHP Fatal error: Out of memory (allocated 28049408) (tried to allocate 201335 bytes) in xxx on line 2139, referer: xxx I have read many many posts here, but I think there is something fundamental that I'm missing - If I understand correctly some php script tried to allocate 200K after allocating 28MB, and fails to do so. First question is: should this cause the apache to crash??? Next, I tried to look at 'top' command while I do my little test. Indeed I see 7 httpd processes, each reserving about 30MB - which explains why my RAM runs out. How do I prevent apache from creating new processes until it's out of memory? I tried configuring /etc/httpd/conf/httpd.conf like this: <IfModule prefork.c> StartServers 1 MinSpareServers 1 MaxSpareServers 1 ServerLimit 1 MaxClients 1 MaxRequestsPerChild 100 </IfModule> But got the same exact result! What am I missing? Thanks a lot!
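
    With roughly 30 MB per PHP-enabled prefork child and 256 MB of RAM shared with MySQL, the number of children has to be capped far below Apache's defaults and PHP's own memory_limit kept modest, otherwise a handful of parallel requests is enough to exhaust memory exactly as described. A hedged starting point (the numbers are assumptions to tune, not measured values):

      # suggested values for the existing <IfModule prefork.c> block in /etc/httpd/conf/httpd.conf
      StartServers          2
      MinSpareServers       1
      MaxSpareServers       3
      MaxClients            4        # about 4 x 30 MB of PHP, leaving room for MySQL
      MaxRequestsPerChild 200        # recycle children so leaked memory is returned

      # and cap PHP itself in /etc/php.ini
      memory_limit = 64M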

    Read the article

  • Apache serving empty gzip with assets produced by Rails Asset Pipeline

    - by PizzaPill
    I followed the steps described on the blogpost The Asset Pipeline, from development to production and tweaked them to my environment. The two important files are: /etc/apache/site-available/example.com <VirtualHost *:80> ServerName example.com ServerAlias www.example.com DocumentRoot "/var/www/sites/example.com/current/public" ErrorLog "/var/log/apache2/example.com-error_log" CustomLog "/var/log/apache2/example.com-access_log" common <Directory "/var/www/sites/example.com/current/public"> Options All AllowOverride All Order allow,deny Allow from all </Directory> <Directory "/var/www/sites/example.com/current/public/assets"> AllowOverride All </Directory> <LocationMatch "^/assets/.*$"> Header unset Last-Modified Header unset ETag FileETag none ExpiresActive On ExpiresDefault "access plus 1 year" </LocationMatch> RewriteEngine On # Remove the www RewriteCond %{HTTP_HOST} ^www.example.com$ [NC] RewriteRule ^(.*)$ http://example.com/$1 [R=301,L] </VirtualHost> /var/www/sites/example.com/shared/assets/.htaccess RewriteEngine on RewriteCond %{HTTP:Accept-Encoding} \b(x-)?gzip\b RewriteCond %{REQUEST_FILENAME}.gz -s RewriteRule ^(.+) $1.gz [L] <FilesMatch \.css\.gz$> ForceType text/css Header set Content-Encoding gzip </FilesMatch> <FilesMatch \.js\.gz$> ForceType text/javascript Header set Content-Encoding gzip </FilesMatch> But apache seems to send empty gzip files because the testsite looses all styles and firebug doesnt find any content for the css files. Altough if I call the assets-path directly I get some gibberish that looks like binary data. If I move the htaccess-file everything is back to normal. How could I find out where/what went wrong or do you have any suggestions what error I made? > apache2 -v System: Server version: Apache/2.2.14 (Ubuntu) Server built: Mar 5 2012 16:42:17 > uname -a Linux node0 2.6.18-028stab094.3 #1 SMP Thu Sep 22 12:47:37 MSD 2011 x86_64 GNU/Linux
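
    One frequent cause of exactly this symptom is mod_deflate compressing the pre-gzipped assets a second time: the browser undoes one layer of gzip and is left with gzip data where it expected CSS/JS, which renders as an unstyled page. If mod_deflate is enabled on this server, a hedged addition to the assets .htaccess is to exempt those requests (Request_URI still carries the original .css/.js name, since the .gz is only added by the internal rewrite):

      # keep mod_deflate away from responses that are already gzipped
      SetEnvIfNoCase Request_URI "\.(css|js)$" no-gzip dont-vary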

    Read the article

  • Can't find gnutls library when executing rpmbuild under non-root

    - by Rilindo
    I am trying to build ntfs-3g/ntfsprogs from the latest source, using the .spec from rpmforge, as non-root via rpmbuild. During the compile, it fails at this step: checking for GNUTLS... no configure: error: ntfsprogs crypto code requires the gnutls library. error: Bad exit status from /var/tmp/rpm-tmp.78913 (%build) However, I can compile it successfully outside of rpmbuild, so it sounds like it is just a matter of the library being seen during the build. I can also confirm that rpmbuild can see the directory where the gnutls library resides: [foo@bar ~]$ rpmbuild -E '%{_libdir}' rpmbuild/SPECS/ntfsprogs.spec /usr/lib Library location: [foo@bar ntfs-3g_ntfsprogs-2012.1.15]$ /sbin/ldconfig -p | grep -i gnutls libgnutls.so.13 (libc6) => /usr/lib/libgnutls.so.13 libgnutls.so (libc6) => /usr/lib/libgnutls.so libgnutls-openssl.so.13 (libc6) => /usr/lib/libgnutls-openssl.so.13 libgnutls-openssl.so (libc6) => /usr/lib/libgnutls-openssl.so libgnutls-extra.so.13 (libc6) => /usr/lib/libgnutls-extra.so.13 libgnutls-extra.so (libc6) => /usr/lib/libgnutls-extra.so What would cause the library not to be seen when building an RPM? EDIT: Oh yeah, I am running CentOS 5.5.
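
    The "checking for GNUTLS..." test goes through pkg-config rather than ldconfig, so the runtime .so files listed above are not what configure is looking for; the usual suspects are a missing gnutls-devel (its gnutls.pc file) or a PKG_CONFIG_PATH that differs inside the rpmbuild environment. A few hedged checks from the same shell you run rpmbuild in (the build-tree path is an assumption):

      rpm -q gnutls-devel pkgconfig
      pkg-config --exists gnutls && pkg-config --modversion gnutls
      echo "$PKG_CONFIG_PATH"
      # the definitive answer is in the config.log that configure leaves behind
      grep -i -A2 gnutls ~/rpmbuild/BUILD/ntfs-3g_ntfsprogs-2012.1.15/config.log | tail -n 20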

    Read the article

  • Host name change breaking http? Fedora

    - by Dave
    OK so I have been messing around on my development server. It has been a while since I have had my head in linux and I suspect I have broken something. I have SSH running and that is working fine. I also have HTTP and I had FTP running also. Earlier today I decided I wanted to rename the machine so I updated the /etc/hosts file and /etc/sysconfig/network. I also changed the server name in the httpd.conf. I rebooted the machine and reconnected to SSH fine. Later I was messing around with the FTP service (trying to tighten up the user security) and when i tried to connect remotely to FTP no joy, it said cannot connect. I thought that was weird but had planned to remove ftp as we will be using github so removed ftp and moved on. Then I tried to connect to the website but major fail. even connecting to the IP address is failing. I used lynx to connect to the localhost and there was my site so something going on at server level. I thought maybe something up with iptables but I have not changed them but tried adding http but still no joy. I have a - Fedora release 17 (Beefy Miracle) NAME=Fedora VERSION="17 (Beefy Miracle)" ID=fedora VERSION_ID=17 PRETTY_NAME="Fedora 17 (Beefy Miracle)" ANSI_COLOR="0;34" CPE_NAME="cpe:/o:fedoraproject:fedora:17" Fedora release 17 (Beefy Miracle) Fedora release 17 (Beefy Miracle) Linux version 3.3.4-5.fc17.x86_64 ([email protected]) (gcc version 4.7.0 20120504 (Red Hat 4.7.0-4) (GCC) ) #1 SMP Mon May 7 17:29:34 UTC 2012 This is my iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT icmp -- anywhere anywhere ACCEPT all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh REJECT all -- anywhere anywhere reject-with icmp-host-prohibited Chain FORWARD (policy ACCEPT) target prot opt source destination REJECT all -- anywhere anywhere reject-with icmp-host-prohibited Chain OUTPUT (policy ACCEPT) target prot opt source destination Like I say I can use SSH no issue but http although running is a no go from a remote computer. Any ideas?
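
    In the iptables listing above there is no ACCEPT for port 80, and the INPUT chain ends with a REJECT-all, so a rule added with -A is appended after the REJECT and never matches, which gives exactly the symptom described: working via lynx on localhost, dead from anywhere else. A hedged fix is to insert the rule ahead of the REJECT and persist it:

      # insert an HTTP accept at position 5, just after the SSH rule in the listing above
      iptables -I INPUT 5 -m state --state NEW -p tcp --dport 80 -j ACCEPT
      # make it survive a reboot
      iptables-save > /etc/sysconfig/iptables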

    Read the article

  • How can I update generic non-pnp monitor?

    - by njk
    Background I've been running a KVM switch with my monitor at 1920 x 1080 over VGA for over a year. Did a Windows Update on 12/11/12 which did the following: Update for Windows 7 for x64-based Systems (KB2779562) Security Update for Windows 7 for x64-based Systems (KB2779030) Cumulative Security Update for Internet Explorer 8 for Windows 7 for x64-based Systems (KB2761465) Windows Malicious Software Removal Tool x64 - December 2012 (KB890830) Security Update for Windows 7 for x64-based Systems (KB2753842) Security Update for Windows 7 for x64-based Systems (KB2758857) Security Update for Windows 7 for x64-based Systems (KB2770660) After a restart, my extended monitor was dark. I attempted to reset the extended display configuration, and noticed my monitor was being detected as a Generic Non-PnP Monitor: I uninstalled, downloaded new, and re-installed display drivers. Nothing. I attempted to unplug my monitor from the power for 15 minutes. Nothing. I followed some of the suggestions on this thread; specifically DanM's which suggested to create a new *.inf file and replace that in Device Manager. Device Manager said the "best driver software for your device is already installed". The only thing that works is when the monitor is directly attached to the laptop. This obviously is not what I want. My thought is to somehow remove the Generic Non-PnP Monitor from registry. How would I accomplish this and would this help? Any other suggestions? Relevant Hardware ASUS VE276 Monitor TRENDnet 2-Port USB KVM Switch (TK-207K) HP Laptop w/ ATI Radeon HD 4200 Screens

    Read the article

  • linux "date -s" command not working to change date on a server

    - by nelaar
    date +%T --set="12:19:06" 12:19:06 date Mon Nov 26 12:37:32 SAST 2012 I have tried many different forms of this command but nothing seems to work; changing the date on this server, which runs as a VM, is not working. Our messages log shows entries like this: ntpd[3496]: time correction of -1098 seconds exceeds sanity limit (1000); set clock manually to the correct UTC time. Our server is now about 20 minutes out. It seems like our server has not been updating the time correctly for a few days. Nov 22 19:29:23 hostname ntpd[1818]: time reset -998.577519 s Nov 22 19:32:34 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10 Nov 22 19:33:39 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1 Nov 22 19:52:30 hostname ntpd[1818]: time reset -998.992426 s Nov 22 19:55:47 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10 Nov 22 19:56:53 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1 Nov 22 20:13:04 hostname ntpd[1818]: time reset -999.374412 s Nov 22 20:16:40 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10 Nov 22 20:17:44 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1 Nov 22 20:32:02 hostname ntpd[1818]: time reset -999.716832 s Nov 22 20:35:28 hostname ntpd[1818]: synchronized to LOCAL(0), stratum 10 Nov 22 20:36:16 hostname ntpd[1818]: synchronized to 41.134.20.28, stratum 1 Nov 22 20:56:39 hostname ntpd[1818]: time correction of -1000 seconds exceeds sanity limit (1000); set clock manually to the correct UTC time.
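
    Once the offset exceeds ntpd's 1000-second sanity limit it refuses to step the clock, and on a VM the hypervisor may also be steering the guest clock underneath you, which would make a manual date -s appear to be ignored as well. A hedged sequence to step the clock once and let ntpd keep it from there:

      service ntpd stop
      ntpdate 41.134.20.28        # one-off step against the same server ntpd was using
      service ntpd start
      # if this is a VMware guest, check whether host time sync is also active
      vmware-toolbox-cmd timesync status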

    Read the article

  • Why doesn't Wireshark recognize this HTTP response?

    - by Alois Mahdal
    I have a trivial CGI script that outputs simple text content. It's written in Perl and using CGI module and it specifies only the most basic headers: print $q->header( -type => 'text/plain', -Content_length => $length, ); print $stuff; There's no apparent issue with functionality, but I'm confused about the fact that Wireshark does not recognize the HTTP response as HTTP--it's marked as TCP. Here is request and response: GET /cgi-bin/memfile/memfile.pl?mbytes=1 HTTP/1.1 Host: 10.6.130.38 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: cs,en-us;q=0.7,en;q=0.3 Accept-Encoding: gzip, deflate Connection: keep-alive HTTP/1.1 200 OK Date: Thu, 05 Apr 2012 18:52:23 GMT Server: Apache/2.2.15 (Win32) mod_ssl/2.2.15 OpenSSL/0.9.8m Content-length: 1048616 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: text/plain; charset=ISO-8859-1 XXXXXXXX... And here is the packet overview (Full packet is here on pastebin) No. Time Source srcp Destination dstp Protocol Info tcp.stream abstime 5 0.112749 10.6.130.38 80 10.6.130.53 48072 TCP [TCP segment of a reassembled PDU] 0 20:52:23.228063 Frame 5: 1514 bytes on wire (12112 bits), 1514 bytes captured (12112 bits) Ethernet II, Src: Dell_97:29:ac (00:1e:4f:97:29:ac), Dst: Dell_3b:fe:70 (00:24:e8:3b:fe:70) Internet Protocol Version 4, Src: 10.6.130.38 (10.6.130.38), Dst: 10.6.130.53 (10.6.130.53) Transmission Control Protocol, Src Port: http (80), Dst Port: 48072 (48072), Seq: 1, Ack: 330, Len: 1460 Now when I see this in Wireshark: there's usual TCP handshake then the GET request shown as HTTP with preview then the next packet contains the response, but is not marked as an HTTP response--just a generic "[TCP segment of a reassembled PDU]", and is not caught by "http.response" filter. Can somebody explain why Wireshark does not recognize it? Is there something wrong with the response?
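
    This is reassembly at work rather than a malformed response: with TCP desegmentation enabled, Wireshark hands the 1 MB body to the HTTP dissector only once the last segment arrives, so the frame carrying the headers is labelled "TCP segment of a reassembled PDU" and only the final frame of the response matches http.response. A way to confirm from the capture (flags follow the 1.6/1.8-era tshark; newer builds use -Y instead of -R):

      tshark -r capture.pcap -o tcp.desegment_tcp_streams:TRUE \
          -R 'http.response' -T fields -e frame.number -e http.response.code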

    Read the article

  • Bacula virtual backup job doesn't run, no output?

    - by Zoredache
    I am trying to get Virtual Backups working, but when I try to run a virtual backup job, it appears to get created, but then never seems to actually run. I have a full, and a couple incremental backups. status director JobId Level Files Bytes Status Finished Name ==================================================================== 1283 Full 10,565 1.963 G OK 21-Dec-12 09:47 nms-Job 1284 Incr 314 129.6 M OK 21-Dec-12 09:49 nms-Job 1285 Incr 230 147.2 M OK 21-Dec-12 09:51 nms-Job 1288 Incr 525 138.8 M OK 21-Dec-12 11:25 nms-Job I attempt to start a job from bconsole like this. *run job=nms-Job level=VirtualFull Using Catalog "MySQL" Run Backup job JobName: nms-Job Level: VirtualFull Client: nms-FileDaemon FileSet: nms-FileSet Pool: nms-pool (From Job resource) Storage: File_d1 (From Pool resource) When: 2012-12-21 13:07:54 Priority: 10 OK to run? (yes/mod/no): Job queued. JobId=1291 Then my new job, just sits there, doing nothing. The JobStatus shows that the job was created, but it appears to never run? All the full, and incremental backups are terminating normally. *llist jobid=1291 JobId: 1,291 Job: nms-Job.2012-12-21_13.07.56_07 Name: nms-Job PurgedFiles: 0 Type: B Level: F ClientId: 4 Name: nms-FileDaemon JobStatus: C SchedTime: 2012-12-21 13:07:54 StartTime: 2012-12-21 13:07:56 EndTime: 0000-00-00 00:00:00 RealEndTime: 0000-00-00 00:00:00 JobTDate: 1,356,124,076 VolSessionId: 0 VolSessionTime: 0 JobFiles: 0 JobErrors: 0 JobMissingFiles: 0 PoolId: 19 PooLname: nms-pool PriorJobId: 0 FileSetId: 11 FileSet: nms-FileSet I am getting very frustrated, that this isn't working, mostly because it isn't giving me any error logs, or output at all. I submit the job, and as far as I can tell nothing happens. Is there some status, or debugging level that I can set to get a useful information about why this isn't working? What can I do to make this work? I was originally running Bacula 5.0.2 on Debian Squeeze, out of frustration, I upgraded to the 5.2.6 in the backports repository, hoping that a new version might give me better results.
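
    A VirtualFull reads the existing Full/Incremental volumes and writes a brand-new one, so the Director needs a place to write it: the Pool normally has to name a Next Pool, and the read and write sides cannot share a single device, otherwise the job simply queues waiting on storage without logging an error. Asking the Director what JobId 1291 is blocked on is the quickest check; a sketch of the bconsole session (names taken from the question):

      bconsole
      *status dir
      *status storage=File_d1
      *messages
      *setdebug level=100 dir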

    Read the article

  • Apache reverse proxy with VirtualHost not serving a page

    - by Mr Aleph
    I have an Apache reverse proxy set to move requests to a Tomcat Applet. The config is similar to: <VirtualHost 100.100.100.100:80> ProxyPass /AppName/App http://1.1.1.1/AppName/App ProxyPassReverse /AppName/App http://1.1.1.1/AppName/App </VirtualHost> I also have a page called summary.html that exists on 1.1.1.1 as: http://1.1.1.1/AppName/summary.html When I browse directly to it I have no problem viewing it, however if I try to get there via the reverse proxy I get a blank page. Wireshark shows me a 503, but this one is coming from the Apache reverse proxy (IP 100.100.100.100) and not the Tomcat (IP 1.1.1.1). Should I add http://1.1.1.1/AppName/ to the config? How? I tried it but I get a blank page, however this one shows on the URL bar of the browser the internal IP of the Tomcat, so, no go. Help is appreciated. Thanks. EDIT: This is the dump from Wireshark: GET /AppName/ HTTP/1.1 Host: 100.100.100.100 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Cache-Control: max-age=0 Accept-Language: en-us Accept-Encoding: gzip, deflate Connection: keep-alive HTTP/1.1 404 Not Found Date: Tue, 30 Jan 2012 09:08:51 GMT Server: Apache Content-Length: 1 Connection: close Content-Type: text/html; charset=iso-8859-1
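
    The vhost only proxies /AppName/App, so a request for /AppName/ or /AppName/summary.html is never forwarded and is answered by the front Apache itself, which is where the 404 in the capture (and the 503 seen earlier) comes from. Proxying the whole application path is usually enough; a hedged tweak to the vhost:

      # forward everything under /AppName/ to Tomcat, not just the /AppName/App path
      ProxyPass        /AppName/ http://1.1.1.1/AppName/
      ProxyPassReverse /AppName/ http://1.1.1.1/AppName/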

    Read the article

  • Mavericks permission issues with Windows Server deduplicated shares

    - by dmohlmaster
    We have a number of 10.9-10.9.3 - Mavericks - machines installed throughout our facility. Much of the user content is pulled from shares stored on our Windows Server 2012 fileservers with deduplication enabled. I have found that files newly written or unoptimized are able to be accessed without issue - read, written, modified, etc. Once the file gets optomized/deduplicated and Windows adds the P & L attributes - sparse and symlink - the Macs running Mavericks begin to have access issues. Once the files get deduplicated, users begin receiving read access errors when copying files (see error1 below). This happens when copying to folders within the current folder tree or copying somewhere to the local system. If you 'stop' the copy operation and retry a few more times, it may eventually work for the specific instance but fail again later. I am however, able to copy these files without issue via the terminal. Other systems running 10.7 do not experience the same issues and are able to access file server resources without issue. Many of the systems having issues are newer and thus not able to be downgraded to 10.8 or 10.7. I have tried finder replacements such as Pathfinder but the results are the same. I know this is at least similar to the issues many Mac users are already experiencing and posting about but I haven't seen it directly linked to deduplication and the attributes written by Windows server. Has anyone seen this issue? Have any solutions been found? Error 1: When copying files after the PL attributes have been set by deduplication. "One or more items can't be copied to "Foler" because you don't have permissions to read them. ******************************************' Via the system.log, I am also seeing the following error when accessing these deduplicated file shares. The reparse point tag listed below is "IO_REPARSE_TAG_DEDUP" Reported error: "smbfs_nget: filename.ext - unknown reparse point tag 0x80000013"
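
    Given that Terminal copies succeed where Finder fails once the dedup attributes appear, it can help to record what the affected files look like from the Mavericks side and which SMB dialect the share negotiated. A small diagnostic sketch (the share path and file name are placeholders):

      # which protocol version did the Mac negotiate for this share?
      smbutil statshares -a
      # flags, extended attributes and ACLs on one optimised file
      ls -lO@e "/Volumes/Share/path/affected-file.ext"
      # watch for the smbfs reparse-point warning while reproducing the Finder copy
      tail -f /var/log/system.log | grep -i smbfs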

    Read the article
