Search Results

Search found 17950 results on 718 pages for 'directory listing'.


  • Toshiba Satellite L630 broken after BIOS update

    - by Mustafa Kamal
    I have a Toshiba Satellite L630 which has been broken. It had no OS installed; all the disk partitions had been cleared into one single empty, unformatted partition. So I began to install Windows XP on this laptop. Apparently Windows XP's driver support for this laptop is very limited, so I had to find almost every important driver (display, sound, ethernet, wireless, etc.) on the net and install each one manually. I started googling and found driver download pages on several of Toshiba's websites (the global version, Europe, Asia, etc.). It was pretty hard to find the exact drivers, but I managed to find good ones. It all worked quite well, with only a few glitches. But everything turned into a big mess when I downloaded the "BIOS Update", which is also listed on Toshiba's official driver directory site. When I installed it, it showed a big red warning telling me not to do anything while flashing the BIOS, and I followed that instruction prudently. The process finished, and the BIOS update software (it is an InsydeH2O BIOS) told me that the BIOS had been successfully updated and the computer needed to restart. So I restarted the computer. This is where the problem appeared: I can no longer boot the laptop. The boot process seems to enter Windows for a moment (it shows the Windows XP loading screen), then suddenly hits that hateful blue screen and instantly restarts the machine. It goes in a loop: boot BIOS - enter XP - blue screen - restart. I can't even reinstall Windows XP; every time the machine tries to boot from the Windows XP CD, I get the same blue screen as when loading from the HDD. Many Google results say I should open the laptop cover and try to clear the CMOS with some kind of jumper, or unplug and re-plug the CMOS battery. Do I really need to do that? Is there any way to do this without disassembling my laptop? I have read some tricks about booting from a USB device, but I can't get the exact tools I need to do that... By the way, the detailed model number is in a photo taken from the back of my laptop.

    Read the article

  • Installing Phusion Passenger 4.0.20 on Ubuntu 13.10

    - by tempestfire2002
    So I'm trying to install Passenger on the newest version of KUbuntu (13.10). I installed Apache2 using the apache2-mpm-worker package using the Muon Package Manager. And these are the commands I ran. rvmsudo gem install passenger rvmsudo passenger-install-apache2-module But I keep getting the following errors: [Fri Oct 18 15:52:13.227790 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_LOCK_DIR} is not defined [Fri Oct 18 15:52:13.227933 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_PID_FILE} is not defined [Fri Oct 18 15:52:13.227969 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_RUN_USER} is not defined [Fri Oct 18 15:52:13.227991 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_RUN_GROUP} is not defined [Fri Oct 18 15:52:13.228026 2013] [core:warn] [pid 13095] AH00111: Config variable ${APACHE_LOG_DIR} is not defined [Fri Oct 18 15:52:13.231737 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_RUN_DIR} is not defined [Fri Oct 18 15:52:13.232760 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_LOG_DIR} is not defined [Fri Oct 18 15:52:13.233043 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_LOG_DIR} is not defined [Fri Oct 18 15:52:13.233078 2013] [core:warn] [pid 13095:tid 3074562624] AH00111: Config variable ${APACHE_LOG_DIR} is not defined AH00526: Syntax error on line 74 of /etc/apache2/apache2.conf: Invalid Mutex directory in argument file:${APACHE_LOCK_DIR} -------------------------------------------- WARNING: Apache doesn't seem to be compiled with the 'prefork', 'worker' or 'event' MPM Phusion Passenger has only been tested on Apache with the 'prefork', the 'worker' and the 'event' MPM. Your Apache installation is compiled with the '' MPM. We recommend you to abort this installer and to recompile Apache with either the 'prefork', the 'worker' or the 'event' MPM. Press Ctrl-C to abort this installer (recommended). Press Enter if you want to continue with installation anyway. The result of my running apache2ctl -V is: Server version: Apache/2.4.6 (Ubuntu) Server built: Aug 9 2013 14:31:04 Server's Module Magic Number: 20120211:23 Server loaded: APR 1.4.8, APR-UTIL 1.5.2 Compiled using: APR 1.4.8, APR-UTIL 1.5.2 Architecture: 32-bit Server MPM: worker threaded: yes (fixed thread count) forked: yes (variable process count) Server compiled with.... -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=256 -D HTTPD_ROOT="/etc/apache2" -D SUEXEC_BIN="/usr/lib/apache2/suexec" -D DEFAULT_PIDLOG="/var/run/apache2.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="mime.types" -D SERVER_CONFIG_FILE="apache2.conf" As can be seen, the server is compiled with the worker MPM, so why is passenger complaining? And how do I solve the above errors (warnings, really, but to be safe, I'd like to not have any warnings)? Thanks.
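
    Possible direction (an untested sketch, assuming the stock Ubuntu Apache layout): the ${APACHE_LOCK_DIR}-style warnings usually mean the environment file that apache2ctl normally sources was never loaded, so the installer's probe of Apache fails and the detected MPM comes back empty. Loading /etc/apache2/envvars before running the installer is worth trying first:

      # load the variables apache2ctl would normally provide
      . /etc/apache2/envvars
      # re-run the installer with that environment in place; if rvmsudo drops these
      # variables, plain "sudo -E" with the right gem paths is the fallback to try
      rvmsudo passenger-install-apache2-module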

    Read the article

  • Linux not picking up new partition correctly on EMC pseudo device

    - by James
    Hi We have a database server running oracle rac. We were recently running out of space on the main LUN that it is attached to. I created a new 100GB LUN and concatenated this onto the existing LUN creating a new MetaLUN. After some messing I managed to get linux to recognise the new space. I then created a new partition in on the pseudo device, to use the new space. Previously when I have done this on other system the next step is to create an ASM disk on the new partition and add this disk to the oracle disk group. This however fails. I am aware of various issues with ASM and powerpath, but I don't think this is the issue here. As on while investigating the issue I discovered that one of the underlying logical device is not reflecting the size change. See below; Powermt displays all of the underlying logical units [root@XXXXX~]# powermt display dev=emcpowerd Pseudo name=emcpowerd CLARiiON ID=CKM00091500009 [VFRAC2] Logical device ID=6006016030312200787502866C65DE11 [LUN 30] state=alive; policy=CLAROpt; priority=0; queued-IOs=0 Owner: default=SP A, current=SP A Array failover mode: 1 ============================================================================== ---------------- Host --------------- - Stor - -- I/O Path - -- Stats --- ### HW Path I/O Paths Interf. Mode State Q-IOs Errors ============================================================================== 3 qla2xxx sde SP A0 active alive 0 0 3 qla2xxx sdj SP B0 active alive 0 0 4 qla2xxx sdo SP A1 active alive 0 0 4 qla2xxx sdt SP B1 active alive 0 0 Fdisk on the pseudo device shows correct space. [root@XXXXX ~]# fdisk -l /dev/emcpowerd Disk /dev/emcpowerd: 429.4 GB, 429496729600 bytes 255 heads, 63 sectors/track, 52216 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/emcpowerd1 1 39162 314568733+ 83 Linux /dev/emcpowerd2 39163 52216 104856255 83 Linux fdisk on one of the logical units is wrong [root@XXXXX~]# fdisk -l /dev/sde Disk /dev/sde: 322.1 GB, 322122547200 bytes 255 heads, 63 sectors/track, 39162 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sde1 1 39162 314568733+ 83 Linux /dev/sde2 39163 52216 104856255 83 Linux fdisk on the rest of the units is fine [root@XXXXX ~]# fdisk -l /dev/sdj Disk /dev/sdj: 429.4 GB, 429496729600 bytes 255 heads, 63 sectors/track, 52216 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdj1 1 39162 314568733+ 83 Linux /dev/sdj2 39163 52216 104856255 83 Linux Also when I created the the partition linux did not create the any entries in the /dev directory for the second partition so I created these manually [root@XXXXX dev]# mknod sde2 b 8 66 [root@XXXXX dev]# ls -al sd[ejot]? brw-r----- 1 root disk 8, 65 Dec 29 14:20 sde1 brw-r--r-- 1 root disk 8, 66 Apr 8 20:31 sde2 brw-r----- 1 root disk 8, 145 Dec 29 14:19 sdj1 brw-r--r-- 1 root disk 8, 146 Apr 8 20:33 sdj2 brw-r----- 1 root disk 8, 225 Apr 6 23:12 sdo1 brw-r--r-- 1 root disk 8, 226 Apr 8 20:33 sdo2 brw-r----- 1 root disk 65, 49 Dec 29 14:19 sdt1 brw-r--r-- 1 root disk 65, 50 Apr 8 20:33 sdt2 This is a production server that we cannot easily reboot. Any ideas would be much appreciated. J
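
    One hedged next step, since only /dev/sde still reports the old 322.1 GB geometry: rescan that single stale path and let PowerPath reconcile it before touching ASM again (standard commands on RHEL-era kernels and EMC PowerPath, but verify against your change process first):

      # ask the SCSI layer to re-read the size of the stale path
      echo 1 > /sys/block/sde/device/rescan
      # re-read its partition table
      blockdev --rereadpt /dev/sde
      # have PowerPath pick up the new geometry and confirm all paths now match
      powermt config
      powermt display dev=emcpowerd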

    Read the article

  • 403 forbidden while submitting a POST request with image data via iPhone application

    - by binnyb
    I am creating an iOS application which allows users to send image/text data to my webserver via a POST request. I am successfully sending POSTs to the server when image data is not included in the request, but any time I POST with image data the server spits back a 403 Forbidden. I have tried adding the following to the .htaccess file in the directory of the script, with no luck: Options +Indexes FollowSymLinks +ExecCGI Order allow,deny Allow from all. Web browsers and Android devices can successfully POST with image data to the script; the only device which cannot is the iPhone. POSTing with data to other hosting providers works as expected - it is just this host (ipowerweb.com). I noticed that when I try to POST to ANY script on this server with data, it returns a 403 Forbidden. Another note: I can successfully post to another server hosted by iPowerWeb, but mine can't seem to handle it. My host has tried to resolve the issue but cannot, and they have marked it on their end as "resolved", so no more help from them. I wish to keep this host, as moving would be a pain - I will change hosts only as a last resort, so please help me! Why am I getting this 403 Forbidden error only when I submit data via my iPhone application? How can I resolve the issue so I can successfully POST data? Any advice on what I can do would be greatly appreciated. Edit: as requested, here are the response headers: { Connection = close; "Content-Length" = 217; "Content-Type" = "text/html; charset=iso-8859-1"; Date = "Wed, 12 Jan 2011 19:11:19 GMT"; Server = "Apache/2"; } Edit: as requested, here are the request headers (oops): { "Accept-Encoding" = gzip; "Content-Length" = 5781; "Content-Type" = "multipart/form-data; charset=utf-8; boundary=0xKhTmLbOuNdArY"; "User-Agent" = "YeahIAteThat 1.0 (iPhone; iPhone OS 4.2.1; en_US)"; }
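
    A hedged way to narrow this down from any machine (the URL, field name, and file below are placeholders): replay the upload with curl using the same User-Agent the app sends. If that also draws a 403, a server-side filter such as mod_security keying on the User-Agent or the multipart body is the likely culprit rather than anything in the iOS code:

      curl -i -X POST \
        -H "User-Agent: YeahIAteThat 1.0 (iPhone; iPhone OS 4.2.1; en_US)" \
        -F "photo=@test.jpg" \
        http://example.com/path/to/upload-script.php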

    Read the article

  • BIND split-view DNS config problem

    - by organicveggie
    We have two DNS servers: one external server controlled by our ISP and one internal server controlled by us. I'd like internal requests for foo.example.com to map to 192.168.100.5 and external requests continue to map to 1.2.3.4, so I'm trying to configure a view in bind. Unfortunately, bind fails when I attempt to reload the configuration. I'm sure I'm missing something simple, but I can't figure out what it is. options { directory "/var/cache/bind"; forwarders { 8.8.8.8; 8.8.4.4; }; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; zone "." { type hint; file "/etc/bind/db.root"; }; zone "localhost" { type master; file "/etc/bind/db.local"; }; zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; }; zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; }; zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; }; view "internal" { zone "example.com" { type master; notify no; file "/etc/bind/db.example.com"; }; }; zone "example.corp" { type master; file "/etc/bind/db.example.corp"; }; zone "100.168.192.in-addr.arpa" { type master; notify no; file "/etc/bind/db.192"; }; I have excluded the entries in the view for allow-recursion and recursion in an attempt to simplify the configuration. If I remove the view and just load the example.com zone directly, it works fine. Any advice on what I might be missing?
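
    The likely cause (hedged, but this is standard BIND behaviour): once any view is defined, every zone - including the root hints, localhost and reverse stock zones - must live inside a view; zones left at the top level next to a view make named reject the whole configuration, and named-checkconf will print the exact complaint. A sketch of the layout, with the match-clients ranges as assumptions:

      view "internal" {
          match-clients { 192.168.0.0/16; 127.0.0.1; };
          zone "example.com" { type master; notify no; file "/etc/bind/db.example.com"; };
          zone "example.corp" { type master; file "/etc/bind/db.example.corp"; };
          zone "100.168.192.in-addr.arpa" { type master; notify no; file "/etc/bind/db.192"; };
          zone "." { type hint; file "/etc/bind/db.root"; };
          zone "localhost" { type master; file "/etc/bind/db.local"; };
          // ...the remaining stock zones go here as well
      };
      view "external" {
          match-clients { any; };
          zone "." { type hint; file "/etc/bind/db.root"; };
          // either define an external example.com here or leave it to the ISP's server
      };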

    Read the article

  • nginx, php-cgi and "No input file specified."

    - by Stephen Belanger
    I'm trying to get nginx to play nice with php-cgi, but it's not quite working how I'd like. I'm using some set variables to allow for dynamic host names--basically anything.local. I know that stuff is working because I can access static files properly, however php files don't work. I get the standard "No input file specified." error which normally occurs when the file doesn't exist, but it definitely does exist and the path is correct because I can access the static files in the same path. It could possibly be a permissions thing, but I'm not sure how that could be an issue. I'm running this on Windows under my own user account, so I think it should have permission unless php-cgi is running under a different user without me telling it to. . Here's my config; worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; gzip on; server { # Listen for HTTP listen 80; # Match to local host names. server_name *.local; # We need to store a "cleaned" host. set $no_www $host; set $no_local $host; # Strip out www. if ($host ~* www\.(.*)) { set $no_www $1; rewrite ^(.*)$ $scheme://$no_www$1 permanent; } # Strip local for directory names. if ($no_www ~* (.*)\.local) { set $no_local $1; } # Define default path handler. location / { root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs; index index.php index.html index.htm; # Route non-existent paths through Kohana system router. try_files $uri $uri/ /index.php?kohana_uri=$request_uri; } # pass PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi.conf; } # Prevent access to system files. location ~ /\. { return 404; } location ~* ^/(modules|application|system) { return 404; } } }
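
    "No input file specified." is php-cgi reporting that the SCRIPT_FILENAME it received does not resolve to a readable file; with a relative root like ../Users/... the path is resolved against nginx's own prefix, which the separately started php-cgi process does not share. A hedged tweak (the absolute path below is an assumption - point it at the real location):

      location ~ \.php$ {
          # absolute document root so php-cgi is handed a path it can open
          root "C:/Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs";
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index index.php;
          include fastcgi.conf;
          # fastcgi.conf normally sets this already; repeated here to make the mapping explicit
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      }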

    Read the article

  • Remotely installing an MSI on Citrix servers using WMI

    - by capn
    OK, I'm a C# programmer that is trying to streamline the deployment of a custom windows form app I inherited and built an installer for with WiX (this app will need to be reinstalled regularly as I'm making changes to it). I'm not really used to admin type things (or vbs, or WMI, or terminal servers, or Citrix, and even WiX and MSI are not things I usually deal with) but so far I put together some vbs and have an end goal in mind. The msi does work, and I've installed it from the mapped O: drive on my dev machine and while RDP'd to a citrix machine. End Goal: Deploy code written on my dev machine and compiled into an MSI (that I can improve upon within the confines of WiX and whatever the Windows Installer Engine allows) to the cluster of Citrix machines my users have access to. What am I missing in my script to get the MSI to execute on the remote machines? Layout: Machine A is my dev machine, and has the vbs script and the msi file (XP SP3) Machines C1 - C6 are the Citrix Servers that need the application installed them via the msi (Server 2003 R2 SP2) There is also optionally a shared network resource that all the machines can access. Script: 'Set WMI Constants Const wbemImpersonationLevelImpersonate = 3 Const wbemAuthenticationLevelPktPrivacy = 6 'Set whether this is installing to the debug Citrix Servers Const isDebug = true 'Set MSI location 'Network location yields error 1619 (This installation package could not be opened.) msiLocation = "\\255.255.255.255\odrive\Citrix Deployment\Setup.msi" 'Directory on machine A yields error 3 (file not found) 'msiLocation = "C:\Temp\Deploy\Setup.msi" 'Mapped network drive (on both machines) yield error 3 (file not found) 'msiLocation = "O:\Citrix Deployment\Setup.msi" 'Set login information strDomain = "MyDomain" Wscript.StdOut.Write "user name:" strUser = Wscript.StdIn.ReadLine Set objPassword = CreateObject("ScriptPW.Password") Wscript.StdOut.Write "password:" strPassword = objPassword.GetPassword() 'Names of Citrix Servers Dim citrixServerArray If isDebug Then citrixServerArray = array("C4") Else 'citrixServerArray = array("C1","C2","C3","C5","C6") End If 'Loop through each Citrix Server For Each citrixServer in citrixServerArray 'Login to remote computer Set objLocator = CreateObject("WbemScripting.SWbemLocator") Set objWMIService = objLocator.ConnectServer(citrixServer, _ "root\cimv2", _ strUser, _ strPassword, _ "MS_409", _ "ntlmdomain:" + strDomain) 'Set Remote Impersonation level objWMIService.Security_.ImpersonationLevel = wbemImpersonationLevelImpersonate objWMIService.Security_.AuthenticationLevel = wbemAuthenticationLevelPktPrivacy 'Reference to a process on the machine Dim objProcess : Set objProcess = objWMIService.Get("Win32_Process") 'Change user to install for terminal services errReturn = objProcess.Create _ ("cmd.exe /c change user /install", Null, Null, intProcessID) WScript.Echo errReturn 'Install MSI here 'Reference to a product on the machine Set objSoftware = objWMIService.Get("Win32_Product") 'All users set in option parameter, I'm led to believe that the third parameter is actually ignored 'http://www.webmasterkb.com/Uwe/Forum.aspx/vbscript/2433/Installing-programs-with-VbScript errReturn = objSoftware.Install(msiLocation,"ALLUSERS=2 REBOOT=ReallySuppress",True) Wscript.Echo errReturn 'Change user back to execute errReturn = objProcess.Create _ ("cmd.exe /c change user /execute", Null, Null, intProcessID) WScript.Echo errReturn Next I also tried using this to install, it doesn't return an error code, but doesn't 
install the msi either, and it makes me wonder if the change user /install command is even really working. errReturn = objProcess.Create _ ("cmd.exe /c msiexec /i ""O:\Citrix Deployment\Setup.msi"" /quiet") Wscript.Echo errReturn
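
    Hedged observations on the three path errors: 1619 on the UNC path is consistent with the remote WMI session being unable to re-authenticate to the share (the classic second-hop problem), and error 3 on the C:\Temp and O:\ paths means the path must exist on the Citrix server itself, not on the dev machine. A sketch that sidesteps both by copying the package to each server first (C$ admin-share access and an existing C:\Temp folder on the server are assumptions):

      ' copy the MSI to the target server, then install from a path local to that server
      Set objFSO = CreateObject("Scripting.FileSystemObject")
      objFSO.CopyFile "O:\Citrix Deployment\Setup.msi", "\\" & citrixServer & "\C$\Temp\Setup.msi", True
      errReturn = objSoftware.Install("C:\Temp\Setup.msi", "ALLUSERS=2 REBOOT=ReallySuppress", True)
      Wscript.Echo errReturn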

    Read the article

  • Microsoft-signed driver appears as publisher not verified

    - by Priyanka Gupta
    Task at hand: Microsoft sign drivers on Win 7. I microsoft signed my driver package 3 times every time thinking I might have missed a step or something. However, I cannot seem to get rid of the Windows Security error message "Windows can't verify the publisher of this driver software'. This is not the first time I have signed the driver packages. I was successfully able to sign other driver packages a few months ago. However, with this driver package I keep getting Windows security dialog box. Here's the procedure I follow - Create a new cat file using INF2CAT tool. Self sign the driver using a Versign Class 3 Public Primary Certification Authority - G5.cer. Run the microsoft tests on DTM Servers and clients with the devices that use this driver. Create WLK submission package. Self sign the cab file. Submit the package for certification. The catalog file that comes back after successfully passing tests says Name of signer "Microsoft Windows Hardware Comptibility Publisher". When I check the validity of signature using SignTool, it says the signature is vaild. However, when I try to install the driver with new signed catalog file the windows complain. Any ideas? Edit 11/12/2012: Reply to Eugene's comment Thanks for the help, Eugene. Yes. I did sign two other driver packages before. One of them was modified version of WinUSB driver. I am using the same certificate I used when I signed those two driver packages a few months ago. It costs $250 per signing from Microsoft. I would think that Microsoft would complain about it during certification if the certificate is wrong. I use the following command to self sign the CAT file. I don't have to specify the ceritificate name as there's only one certificate in the directory - Signtool sign /v /a /n CompanyName /t http://timestamp.verisign.com/scripts/timestamp.dll OurCatalogFile.cat Below is the result from running Verify command on the Microsoft signed OurCatalogFile.cat C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\x64signtool verify /v "C:\User s\logotest\Documents\serialdriversigning\OurCatalogFile.cat" Verifying: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat" Hash of file (sha1): BDDF39B1DD95881B462164129758A7FFD54F47D9 Signing Certificate Chain: Issued to: Microsoft Root Certificate Authority Issued by: Microsoft Root Certificate Authority Expires: Sun May 09 18:28:13 2021 SHA1 hash: CDD4EEAE6000AC7F40C3802C171E30148030C072 Issued to: Microsoft Windows Hardware Compatibility PCA Issued by: Microsoft Root Certificate Authority Expires: Thu Jun 04 16:15:46 2020 SHA1 hash: 8D42419D8B21E5CF9C3204D0060B19312B96EB78 Issued to: Microsoft Windows Hardware Compatibility Publisher Issued by: Microsoft Windows Hardware Compatibility PCA Expires: Wed Sep 18 18:20:55 2013 SHA1 hash: D94345C032D23404231DD3902F22AB1C2100341E The signature is timestamped: Tue Nov 06 11:26:48 2012 Timestamp Verified by: Issued to: Microsoft Root Authority Issued by: Microsoft Root Authority Expires: Thu Dec 31 02:00:00 2020 SHA1 hash: A43489159A520F0D93D032CCAF37E7FE20A8B419 Issued to: Microsoft Timestamping PCA Issued by: Microsoft Root Authority Expires: Sun Sep 15 02:00:00 2019 SHA1 hash: 3EA99A60058275E0ED83B892A909449F8C33B245 Issued to: Microsoft Time-Stamp Service Issued by: Microsoft Timestamping PCA Expires: Tue Apr 09 16:53:56 2013 SHA1 hash: 1895C2C907E0D7E5C0292B92C6EA8D0E236F525E Successfully verified: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat" Number of files successfully Verified: 1 Number of warnings: 0 
Number of errors: 0 Thank you!

    Read the article

  • Authenticating with libpam-mysql and libnss-mysql (CentOS)

    - by Chris
    I'm trying to get MySQL to function as a backend for authenticating users on CentOS 6.3. So far I have successfully installed and configured libnss-mysql. I can test this by doing: # groups testuser testuser : sftp Testuser is a member of the sftp group in fact, all MySQL based useraccounts will be hardcoded to it. The sftp group is chrooted and forced to use internal-sftp so they cannot do anything but access their home directory. Then I configured pam-mysql and PAM to allow mysql logins. This also works.. When SELinux is not enforcing. When I do setenforce 1 users can no longer login. Error: Permission denied, please try again. This is my pam_mysql.conf file: users.host=localhost users.db_user=nss-pam-user users.db_passwd=*********** users.database=sftpusers users.table=users users.user_column=username users.password_column=password users.password_crypt=6 verbose=1 My /etc/pam.d/sshd: #%PAM-1.0 auth sufficient pam_sepermit.so auth include password-auth auth required pam_mysql.so config_file=/etc/pam_mysql.conf account sufficient pam_nologin.so account include password-auth account required pam_mysql.so config_file=/etc/pam_mysql.conf password include password-auth # pam_selinux.so close should be the first session rule session required pam_selinux.so close session required pam_loginuid.so # pam_selinux.so open should only be followed by sessions to be executed in the user context session required pam_selinux.so open env_params session optional pam_keyinit.so force revoke session include password-auth And to be complete the contents of some log files.. /var/logs/secure Nov 20 14:52:20 hostname unix_chkpwd[4891]: check pass; user unknown Nov 20 14:52:20 hostname unix_chkpwd[4891]: password check failed for user (testuser) Nov 20 14:52:20 hostname sshd[4880]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.10.107 user=testuser Nov 20 14:52:22 sftpusers sshd[4880]: Failed password for testuser from 192.168.10.107 port 51849 ssh2 /var/logs/audit/audit.log type=USER_AUTH msg=audit(1353420107.070:812): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=pubkey acct="testuser" exe="/usr/sbin/sshd" hostname=? addr=192.168.10.107 terminal=ssh res=failed' type=USER_AUTH msg=audit(1353420112.312:813): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:authentication acct="testuser" exe="/usr/sbin/sshd" hostname=192.168.10.107 addr=192.168.10.107 terminal=ssh res=failed' type=USER_AUTH msg=audit(1353420112.456:814): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=password acct="testuser" exe="/usr/sbin/sshd" hostname=? addr=192.168.10.107 terminal=ssh res=failed' I tried to let audit2why explain the problem but it remains silent even though there are some errors. Does anyone see the problem? Thanks! EDIT: Turns out it's almost working with setenforce 0 I can mkdir foobar but if I do a single ls I get an error: Received message too long 16777216
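
    Since everything works the moment SELinux is permissive, the denials are against sshd doing the MySQL-backed lookups (and, later, the chrooted sftp writes). A hedged diagnostic sketch - check what the audit log and getsebool actually offer before loading anything:

      # turn the recorded denials into a local policy module
      grep denied /var/log/audit/audit.log | audit2allow -M pam_mysql_local
      semodule -i pam_mysql_local.pp
      # look for an existing boolean that covers it before resorting to custom policy
      getsebool -a | grep -i -e auth -e mysql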

    Read the article

  • Varnish, hide port number

    - by George Reith
    My set up is as follows: OS: CentOS 6.2 running on an OpenVZ virtual machine. Web server: Nginx listening on port 8080 Reverse proxy: Varnish listening on port 80 The problem is that Varnish redirects my requests to port 8080 and this appears in the address bar like so http://mysite.com:8080/directory/, causing relative links on the site to include the port number (8080) in the request and thus bypassing Varnish. The site is powered by WordPress. How do I allow Varnish to use Nginx as the backend on port 8080 without appending the port number to the address? Edit: Varnish is set up like so: I have told the Varnish daemon to listen to port 80 by default. VARNISH_VCL_CONF=/etc/varnish/default.vcl # # # Default address and port to bind to # # Blank address means all IPv4 and IPv6 interfaces, otherwise specify # # a host name, an IPv4 dotted quad, or an IPv6 address in brackets. # VARNISH_LISTEN_ADDRESS= VARNISH_LISTEN_PORT=80 # # # Telnet admin interface listen address and port VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1 VARNISH_ADMIN_LISTEN_PORT=6082 # # # Shared secret file for admin interface VARNISH_SECRET_FILE=/etc/varnish/secret # # # The minimum number of worker threads to start VARNISH_MIN_THREADS=1 # # # The Maximum number of worker threads to start VARNISH_MAX_THREADS=1000 # # # Idle timeout for worker threads VARNISH_THREAD_TIMEOUT=120 # # # Cache file location VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin # # # Cache file size: in bytes, optionally using k / M / G / T suffix, # # or in percentage of available disk space using the % suffix. VARNISH_STORAGE_SIZE=1G # # # Backend storage specification VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}" # # # Default TTL used when the backend does not specify one VARNISH_TTL=120 The VCL file that Varnish calls (through an include in default.vcl) consists of: backend playwithbits { .host = "127.0.0.1"; .port = "8080"; } acl purge { "127.0.0.1"; } sub vcl_recv { if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") { set req.backend = playwithbits; set req.http.Host = regsub(req.http.Host, ":[0-9]+", ""); if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } if (req.url ~ "^/$") { unset req.http.cookie; } } } sub vcl_hit { if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } } } sub vcl_miss { if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") { if (req.request == "PURGE") { error 404 "Not in cache."; } if (!(req.url ~ "wp-(login|admin)")) { unset req.http.cookie; } if (req.url ~ "^/[^?]+.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.|)$") { unset req.http.cookie; set req.url = regsub(req.url, "\?.$", ""); } if (req.url ~ "^/$") { unset req.http.cookie; } } } sub vcl_fetch { if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") { if (req.url ~ "^/$") { unset beresp.http.set-cookie; } if (!(req.url ~ "wp-(login|admin)")) { unset beresp.http.set-cookie; } } }
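
    The :8080 is most likely being added behind Varnish - by nginx when it issues redirects and by WordPress when it builds absolute URLs from the port it was contacted on - rather than by Varnish itself. A hedged two-part sketch:

      # nginx (backend): stop advertising the backend port in Location: headers
      server {
          listen 8080;
          port_in_redirect off;
          # ... rest of the existing server block
      }

      # WordPress: pin the canonical URLs without a port (wp-config.php)
      define('WP_HOME',    'http://playwithbits.com');
      define('WP_SITEURL', 'http://playwithbits.com');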

    Read the article

  • 500 Internal Server Error when setting up Apache on Ubuntu+Django

    - by ApacheQ
    I tried with Apache on ubuntu 9.04 and get the same error: Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. And my apache/error.log is: [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] ServerName: 'sapint2' [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] DocumentRoot: '/etc/apache2/htdocs' [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] URI: '/' [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] Location: '/' [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] Directory: None [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] Filename: '/etc/apache2/htdocs' [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] PathInfo: '/' [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] Traceback (most recent call last): [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] File "/usr/lib/python2.7/dist-packages/mod_python/importer.py", line 1537, in HandlerDispatch\n default=default_handler, arg=req, silent=hlist.silent) [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] File "/usr/lib/python2.7/dist-packages/mod_python/importer.py", line 1229, in _process_target\n result = _execute_target(config, req, object, arg) [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] File "/usr/lib/python2.7/dist-packages/mod_python/importer.py", line 1128, in _execute_target\n result = object(arg) [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/modpython.py", line 180, in handler\n return ModPythonHandler()(req) [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/modpython.py", line 142, in call\n self.load_middleware() [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 45, in load_middleware\n mod = import_module(mw_module) [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] File "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module\n import(name) [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] File "/usr/local/lib/python2.7/dist-packages/django/contrib/sessions/middleware.py", line 4, in \n from django.utils.cache import patch_vary_headers [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] File "/usr/local/lib/python2.7/dist-packages/django/utils/cache.py", line 25, in \n from django.core.cache import get_cache [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] File "/usr/local/lib/python2.7/dist-packages/django/core/cache/init.py", line 187, in \n cache = get_cache(DEFAULT_CACHE_ALIAS) [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] File "/usr/local/lib/python2.7/dist-packages/django/core/cache/init.py", line 179, in get_cache\n cache = backend_cls(location, params) [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] File "/usr/local/lib/python2.7/dist-packages/django/core/cache/backends/memcached.py", line 139, in init\n "Memcached cache backend requires either the 'memcache' or 'cmemcache' library" [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] InvalidCacheBackendError: Memcached cache backend requires either the 'memcache' or 'cmemcache' library 
[Sat Oct 06 09:51:30 2012] [notice] caught SIGTERM, shutting down [Sat Oct 06 09:51:31 2012] [notice] mod_python: Creating 8 session mutexes based on 150 max processes and 0 max threads. [Sat Oct 06 09:51:31 2012] [notice] mod_python: using mutex_directory /tmp [Sat Oct 06 09:51:31 2012] [notice] Apache/2.2.17 (Ubuntu) PHP/5.3.5-1ubuntu7.11 with Suhosin-Patch mod_python/3.3.1 Python/2.7.1+ mod_wsgi/3.3 configured -- resuming normal operations I need some help Thanks
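
    The traceback ends at the real problem: InvalidCacheBackendError - the memcached cache backend configured in settings.py needs a Python memcache binding that is not installed. A hedged fix, in either direction (locmem shown only as a dependency-free stand-in):

      # install the binding the configured backend asks for
      sudo pip install python-memcached

      # ...or switch settings.py to a backend with no extra dependency
      CACHES = {
          'default': {
              'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
          }
      }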

    Read the article

  • IIS 401.3 - Unauthorized on only 1 server out of 3 set up for network load balancing

    - by Tony
    Over the weekend our Server Admin set up two virtual Windows 2008 machines with IIS installed and set them up under NLB. I came in and changed the application pool the website was running under to our domain account that has proper access to the database and the file share hosting our .NET web application Sitefinity, and changed it to .NET 4 Integrated. NLB and everything was running fine on both servers. He brought up the third server for our cluster on Tuesday and I performed the same actions.. The only difference was that I was given admin rights for the third server so I could set it up remotely instead of going to his office. He has full control over the share and NTFS perms on \\hostname\Sitefinity and I believe I only had read access. I pointed the web site to the same \\hostname\Sitefinity\sitename share that the others were on and the authentication/authorization test settings passed. I hit the site from http://localhost (like I did successfully from the other two before trying the cluster's IP address) and I received a HTTP Error 401.3 - Unauthorized. I've verified many times that the application pool is running under the same service account. I tried hitting just a simple test.htm.. works fine on both of the first two servers but I get the same 401.3 on the third. I copied my dev project to the local inetpub directory and re-pointed the website and that ran perfectly. I turned on Failed Request Tracing and it acts like it's still running the local IUSR account I guess (instead of my domain account)? Here is an excerpt of the File Cache Access Start and the error from the trace: FileName \\hostname\sitefinity\sitename\test.htm UserName IUSR DomainName NT AUTHORITY ---------- Successful false FileFromCache false FileAddedToCache false FileDirmoned true LastModCheckErrorIgnored true ErrorCode 2147942405 LastModifiedTime ErrorCode Access is denied. (0x80070005) ---------- ModuleName IIS Web Core Notification 2 HttpStatus 401 HttpReason Unauthorized HttpSubStatus 3 ErrorCode 2147942405 ConfigExceptionInfo Notification AUTHENTICATE_REQUEST ErrorCode Access is denied. (0x80070005) ---------- My personal AD account was then granted read/write perms to the share so I created a new application pool and set the site under it in case there was an issue with the application pool but no success. I created another under my own account and it still failed. It just seems like maybe it's not trying to access the files under the account my application pools are running under although that's the only way I've done things before. I set the Physicial Path Credentials in Advanced Settings on the site to the service account and it threw a 500 error of some sort so I assume that's not the answer (and I don't have to do it on the other servers). It's like somehow I'm trying to force impersonation on the IUSR account or something?
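
    The failed-request trace is the key detail (hedged reading): the anonymous request is still mapped to IUSR, so it is IUSR - not the app-pool service account - that is being checked against the UNC path. On the two working servers, anonymous authentication is presumably already set to run as the application pool identity. A sketch of making the third server match (the site name is a placeholder):

      %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
        -section:system.webServer/security/authentication/anonymousAuthentication ^
        /userName:"" /commit:apphost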

    Read the article

  • Windows 7: "The boot selection failed because a required device is inaccessible" (0xc000000f)

    - by piratejackus
    I have a problem with my Windows 7, hardware : Acer 3820TG Operating Systems : Windows 7 and Ubuntu 10.04 dual Case: When I try to boot my windows 7 I see an error: "Window failed to start. A recent hardware or software change might be the cause. To fix the problem: 1.Insert.... 2. .... ... status : 0xc000000f info : The boot selection failed because a required device is inaccessible .... " I can't exactly remember what were my last actions on Windows. I already searched this error and applied the proposed solutions, I created a repair USB (because I don't have a CD-ROM nor a Windows 7 CD) such as; -repair operating system :it says it cannot repair it -checking disk (chkdsk D: /f /r) : it checks the disk without a problem or error and it takes pretty long (more than a hour). But when I restart, still the same error. -I didn't create a restore point so I pass this option -I don't have a system image -I tried to run windows recovery (I have a recovery partition) but there are just two options: 1- Format the operating system but retain user data (copies the files under users to c\backup folder, but when I searched deeper I found that there are some people who already tried this option and couldn't find their user files under backup directory). Plus, I have unfortunately just one partition D (it is a fault I know) because I use always Ubuntu. So this is not applicable in my situation 2- Format entire system (Windows). I keep my valuable data in windows but not in user folder. I was reaching them from Windows. -I tried to repair windows boot by: bootrec /fixMBR bootrec /fixBoot bootrec /rebuildBCD I lost all grub menu, and reinstalled it. - ubuntuforums.org/showthread.php?t=1014708&page=29 nothing changed, same error. I created a thread in microsoft forums - http://social.answers.microsoft.com/Forums/en-US/w7install/thread/69517faf-850a-45fd- 8195-6d4ed831f805 but I couldn't find a solution. Before I run chkdsk from usb repair disk I couldn't able to mount Windows (NTFS) partition from Ubuntu, I was getting "couldn't mount file system, error code 2". I tried to fix ntfs partition from ubuntu and got "segmentation fault". I also created a thread on ubuntuforums for this mount problem: - http://ubuntuforums.org/showthread.php?t=1606427 So, after chkdsk, I could enable to mount windows partition but all I see in this partition is chkdsk logs, no any other data. Now, I don't think I lost my data because I don't get any filesystem errors, just the boot section, but this log files under windows partition makes me afraid. I see that Microsoft developers don't have a solution yet for this error. If you need any information to get more idea I can give, maybe I miss some points or it could be complicated. Thanks in advance.

    Read the article

  • xen-create-image does not create an initrd/initramfs image, and domU does not start with the system image

    - by user219372
    I have Fedora 19 as Dom0. To create image I run # xen-create-image --hostname=debian-wheezy --memory=512Mb --dhcp --size=20Gb --swap=512Mb --dir=/xen --arch=amd64 --dist=wheezy After generation finished I start vm and see: # xl create /etc/xen/debian-wheezy.cfg Parsing config from /etc/xen/debian-wheezy.cfg libxl: error: libxl_dom.c:409:libxl__build_pv: xc_dom_ramdisk_file failed: No such file or directory libxl: error: libxl_create.c:919:domcreate_rebuild_done: cannot (re-)build domain: -3 In the /etc/xen/debian-wheezy.cfg i have # # Kernel + memory size # kernel = '/boot/vmlinuz-3.11.2-201.fc19.x86_64' ramdisk = '/boot/initrd.img-3.11.2-201.fc19.x86_64' and ls -1 /boot/*201* shows /boot/config-3.11.2-201.fc19.x86_64 /boot/initramfs-3.11.2-201.fc19.x86_64.img /boot/System.map-3.11.2-201.fc19.x86_64 /boot/vmlinuz-3.11.2-201.fc19.x86_64 Then if I fix ramdisk directive in .cfg file to /boot/initramfs-3.11.2-201.fc19.x86_64.img vm will start but os inside will not boot. In a tail of xl console I get [ OK ] Reached target Basic System. dracut-initqueue[130]: Warning: Could not boot. dracut-initqueue[130]: Warning: /dev/disk/by-uuid/085883ad-73ca-45cc-8bc5-e6249f869b26 does not exist dracut-initqueue[130]: Warning: /dev/fedora/root does not exist dracut-initqueue[130]: Warning: /dev/fedora/swap does not exist dracut-initqueue[130]: Warning: /dev/mapper/fedora-root does not exist dracut-initqueue[130]: Warning: /dev/mapper/fedora-swap does not exist dracut-initqueue[130]: Warning: /dev/xvda2 does not exist Starting Dracut Emergency Shell... Warning: /dev/disk/by-uuid/085883ad-73ca-45cc-8bc5-e6249f869b26 does not exist Warning: /dev/fedora/root does not exist Warning: /dev/fedora/swap does not exist Warning: /dev/mapper/fedora-root does not exist Warning: /dev/mapper/fedora-swap does not exist Warning: /dev/xvda2 does not exist Generating "/run/initramfs/sosreport.txt" Entering emergency mode. Exit the shell to continue. Type "journalctl" to view system logs. You might want to save "/run/initramfs/sosreport.txt" to a USB stick or /boot after mounting them and attach it to a bug report. dracut:/# .img files in /xen/domains/debian-wheezy exists and listed in disk section of debian-wheezy.cfg So what should i do? Update: I've found that xl does not mount images. In debian-wheezy.cfg I have that: root = '/dev/xvda2 ro' disk = [ 'file:/xen/domains/debian-wheezy/disk.img,xvda2,w', 'file:/xen/domains/debian-wheeze/swap.img,xvda1,w', ] And there is no /dev/xvda* or /dev/sda* or /dev/hda* files in VM.
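
    Hedged reading of the dracut output: the cfg boots the guest with the Dom0's Fedora kernel and initramfs, and that initramfs only knows how to find the Fedora LVM root, so inside the Debian image it can never locate /dev/xvda2. Letting the guest boot its own kernel via pygrub avoids the mismatch (the path below is the usual Fedora location); note also that the swap disk line reads debian-wheeze where the directory is presumably debian-wheezy:

      # /etc/xen/debian-wheezy.cfg - replace the kernel= and ramdisk= lines with:
      bootloader = '/usr/bin/pygrub'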

    Read the article

  • "Hostile" network in the company - please comment on a security setup

    - by TomTom
    I have a little specific problem here that I want (need) to solve in a satisfactory way. My company has multiple (IPv4) networks that are controlled by our router sitting in the middle. Typical smaller shop setup. There is now one additional network that has an IP Range OUTSIDE of our control, connected to the internet with another router OUTSIDE of our control. Call it a project network that is part of another companies network and combined via VPN they set up. This means: They control the router that is used for this network and They can reconfigure things so that they can access the machines in this network. The network is physically split on our end through some VLAN capable switches as it covers three locations. At one end there is the router the other company controls. I Need / want to give the machines used in this network access to my company network. In fact, it may be good to make them part of my active directory domain. The people working on those machines are part of my company. BUT - I need to do so without compromising the security of my company network from outside influence. Any sort of router integration using the externally controlled router is out by this idea So, my idea is this: We accept the IPv4 address space and network topology in this network is not under our control. We seek alternatives to integrate those machines into our company network. The 2 concepts I came up with are: Use some sort of VPN - have the machines log into VPN. Thanks to them using modern windows, this could be transparent DirectAccess. This essentially treats the other IP space not different than any restaurant network a laptop of the company goes in. Alternatively - establish IPv6 routing to this ethernet segment. But - and this is a trick - block all IPv6 packets in the switch before they hit the third party controlled router, so that even IF they turn on IPv6 on that thing (not used now, but they could do it) they would get not a single packet. The switch can nicely do that by pulling all IPv6 traffic coming to that port into a separate VLAN (based on ethernet protocol type). Anyone sees a problem with using he switch to isolate the outer from IPv6? Any security hole? It is sad we have to treat this network as hostile - would be a lot easier - but the support personnel there is of "known dubious quality" and the legal side is clear - we can not fulfill our obligations when we integrate them into our company while they are under a jurisdiction we don't have a say in.

    Read the article

  • Reproducible file corruption for files on windows share

    - by bbuser
    We have about 40 file servers in our intranet to distribute software packages. The servers have names like example01, example02 etc. Every name resolves to a single IP-address (A-record) and the IP resolves back to that name (PTR) for every single server. The thing is, that for a certain file (mypackage.cab) I get different results depending on whether I use: \\192.0.2.01\fs\pkg\X12345678 or \\example01.foo\fs\pkg\X12345678 While in one case the file is correct in the other case the file has exactly the right size, but it is all zeros. For a certain combination of client and server I can reproduce this reliably. It doesn´t matter if I download in Windows Explorer, via robocopy or even from Linux with smbclient. It´s always the same, one file corrupt, the other ok. It happens only for certain combinations of clients and servers, not others. For example: client01 example01.foo -> OK (192.0.2.01 is also OK) client01 example02.foo -> broken (but 192.0.2.02 is OK) client02 example01.foo -> broken (but 192.0.2.01 is OK) client02 example02.foo -> OK (192.0.2.02 is also OK) client03 example06.foo -> OK (but 192.0.2.06 is broken) client03 example07.foo -> OK (192.0.2.07 is also OK) etc... In some cases I get the broken file when I use the IP address in other cases when I use the name. For every client the majority of servers is Ok, but from every client I tested I have at least 4 cases of broken files. All this happens only for mypackage.cab (about 5k in size), it never happened for any of the other files in the same directory. Confused? Certainly I am. Any idea what can cause this or any idea what to try to figure it out is welcome. Clients are Windows XP. Servers are NetApp filers I don´t have access to. I can (and will) contact the filer team again, but first I have to have an idea what is going on.
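
    A hedged first step from an affected client is to binary-compare the two paths and capture proof of the difference; if they differ, client-side caching (Offline Files) keyed per server name is one suspect worth ruling out before blaming the filers:

      fc /b \\192.0.2.02\fs\pkg\X12345678\mypackage.cab \\example02.foo\fs\pkg\X12345678\mypackage.cab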

    Read the article

  • Installing the Newest KeePass for MacOSX from Source

    - by firebush
    I've transitioned our passwords to KeePass. LastPass looks cool, but I prefer a system where we control the database locally rather than it being kept in the cloud. I have a windows and Linux system and both are able to access our KeePass database easily. On my Linux system (Ubuntu), I simply installed KeePass via synaptic and it just worked. So everything was working great, until my wife tried to set up things on her MacBook to access the database. Huge problems. It was so easy on Linux that I didn't expect there to be issues there. In case it's helpful, she's running a fresh install of Mac OSX 10.5.8, Leopard. We simply went to the download site for KeePass: http://keepass.info/download.html Clicked on the link titled KeePass 2.x for Mac OS X from which we retrieved Mono 2.10.5 and KeePass 2.18 from that site (the packages posted there at the time of this writing). Mono installed without problems (at least, none that we saw). She opened the KeePass image and dragged it to the Application side, unpackaging it there. According to the instructions on the KeePass installation instructions, she opened a terminal, changed to the directory in /Applications containing KeePass.exe, and ran it through mono: mono KeePass.exe No application opens at all - we see a blip for it, but then it immediately goes away, indicating to us that it is crashing. Also disconcerting, I see that people are throwing fits about copy-and-paste not working for KeePass 2.18 on MacOSX. Judging from the 2.19 addresses the copy/paste issue. I'm hoping that version will solve all our issues. So here's my question: how can I try out 2.19 on her system. It doesn't seem to be packaged like the 2.18 one is. But we're not scared of building it. I see that the source for 2.19 is here (at the bottom of the page). Can I just download that to her machine somewhere and run something to build it? I'm familiar with automake but not with building .NET source, so please answer gently if this is stupidly easy. :^) btw: tomorrow's my wife's birthday, and this is getting her down. If you know how to navigate these issues, it would be a nice birthday gift for her. Thanks in advance! Update I'll post this since it might be helpful to someone else: I got KeePass 2.18 to run by updating Mono to 2.10.9 (rather than the 2.10.5 given by the site above). With the later version of Mono, it runs without crashing. And yet, I do see the copy and paste issue that others see. I can open a database on her machine, but the incorrect data get's copied. So again, can someone help me install KeePass 2.19? Thanks!
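
    On the build question (a hedged sketch, not a tested recipe): KeePass 2.x source ships as a Visual Studio solution, and Mono's xbuild - included with Mono 2.10.x - can usually compile it. The archive name and output location below are assumptions:

      unzip KeePass-2.19-Source.zip -d KeePass-2.19 && cd KeePass-2.19
      xbuild KeePass.sln /p:Configuration=Release
      # the output folder varies between releases; locate the freshly built binary, then:
      mono "$(find . -name KeePass.exe | head -n 1)"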

    Read the article

  • Workaround for paths in Vista

    - by Argiropoulos Stavros
    Not really a programming question but i think quite relative. I have a .exe running in my windows XP PC. This exe needs a file in the same directory to run and has no problem finding it in XP BUT in Vista(I tried this in several machines and works in some of them) fails to run. I'm guessing there is a problem finding the path.The program is written in basic(Yes i know..) I attach the code below. Can you think of any workarounds? Thank you The exe is located in c:\tools Also the program runs in windows console(It starts but then during the execution cannot find a custom file type .TOP made by the creator of the program) ' PROGRAMM TOP11.BAS DEFDBL A-Z CLS LOCATE 1, 1 COLOR 14, 1 FOR i = 1 TO 80 PRINT "±"; NEXT i LOCATE 1, 35: PRINT "?? TOP11 ??" PRINT " €€‚—‚„‘ ’— ‹„’†‘„— ‘’† „”€„€ ’†‘ ‡€€‘‘†‘ ‰€ † „.‚.‘.€. " COLOR 7, 0 PRINT "-------------------------------------------------------------------------------" PRINT INPUT "ƒ?©« «¦¤ ©¬¤«?©«? ¤???... : ", Factor# INPUT "¤¦£ ¨®e¦¬ [.TOP] : ", topfile$ VIEW PRINT 7 TO 25 file1$ = topfile$ + ".TOP" file2$ = topfile$ + ".T_P" file3$ = "Syntel" OPEN file3$ FOR OUTPUT AS #3 PRINT #3, " ‘¬¤«?©«?? ¤??? = " + STR$(Factor#) + " †‹„‹†€: " + DATE$ CLOSE #3 command1$ = "copy" + " " + file1$ + " " + file2$ SHELL command1$ '’¦ ¨®e¦ .TOP ¤« ¨a­«  £ «¤ ?«a?¥ .T_P OPEN file2$ FOR INPUT AS #1 OPEN file1$ FOR OUTPUT AS #2 bb$ = " \\\ \ , ###.#### ###.#### ####.### ##.### " DO LINE INPUT #1, Line$ Line$ = RTRIM$(LTRIM$(Line$)) icode$ = LEFT$(Line$, 1) IF icode$ = "1" THEN Line$ = " " + Line$ PRINT #2, Line$ PRINT Line$ ELSEIF icode$ = "2" THEN Line$ = " " + Line$ PRINT #2, Line$ PRINT Line$ ELSEIF icode$ = "3" THEN Number$ = MID$(Line$, 3, 6) Hangle = VAL(MID$(Line$, 14, 9)) Zangle = VAL(MID$(Line$, 25, 9)) Distance = VAL(MID$(Line$, 36, 9)) Distance = Distance * Factor# Height = VAL(MID$(Line$, 48, 6)) PRINT #2, USING bb$; icode$; Number$; Hangle; Zangle; Distance; Height PRINT USING bb$; icode$; Number$; Hangle; Zangle; Distance; Height ELSE END IF LOOP UNTIL EOF(1) VIEW PRINT CLS LOCATE 1, 1 PRINT " *** ’„‘ ’“ ‚€‹‹€’‘ *** " END
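
    Hedged guess at the Vista difference: the program opens its data files ("Syntel", the .TOP/.T_P files) with relative paths, so it only works when the current directory is the exe's own folder, and some Vista launch paths (shortcuts, file associations, elevated prompts) start it elsewhere. A wrapper batch file that forces the working directory is a cheap test (the path is assumed from the description):

      @echo off
      rem make the program's own folder the current directory before starting it
      cd /d C:\tools
      TOP11.EXE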

    Read the article

  • Vagrant - Failed login, ssh not set up

    - by motleydev
    This question is two fold because somewhere in my attempts to solve the problem, I created a new one. First: I was trying to vagrant up using a Vagranfile based on the standard hashicorp/precise32 box. Everything worked up until default: SSH auth method: private key where it would eventually time out. Enabling gui in the Vagrantfile showed that the machine never actually logs in. I can use the standard user/pass and log in from that point but the vagrant up process still remains at that prior status. Here's where my understanding might be a little dim. I've tried setting auth method from the insecure_pass_key to my root ~/.ssh/id_rsa or whichever one I wanted to use. I'm not entirely sure where to put a copy of the public key or my authorized keys file. I've got a .vagrant.d folder in my user dir (I'm on OSX) which seems to contain the box images. I've got a .vagrant folder in my directory with the Vagrantfile which seems to contain the specific machine I am building off of. I've tried pouring over the docs and forums but I seem to be missing a key concept here. And Now: After a host of tips/tricks such as rolling back by VirtualBox install, uninstalling/reinstalling Vagrant and VritualBox several times, when I try to run vagrant ssh-config with a Vagrantfile based on the same hashicorp/precise32 it says that the box is not enabled for SSH. Specifically, this error: The provider for this Vagrant-managed machine is reporting that it is not yet ready for SSH. Depending on your provider this can carry different meanings. Make sure your machine is created and running and try again. Additionally, check the output of `vagrant status` to verify that the machine is in the state that you expect. If you continue to get this error message, please view the documentation for the provider you're using. So now I am slightly up a creek. Any help would be appreciated if not just clarifying a concept. Some pertinent info: I'm on OSX Maverics Due to the fact I'm running a dual HD system with system files on one HD and user files on another, my permissions are a little wonky and VBoxManage will only let me run commands via Sudo - not sure if it's pertinent - but maybe. I have no idea what I'm doing. That part is perhaps more important.
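
    Both symptoms point at the box never getting far enough for the key exchange rather than at which key is used, so pinning the credentials and giving the boot more time is a reasonable first experiment. A hedged Vagrantfile sketch (these options exist in Vagrant 1.x; the key path is the host-side default and may differ on your machine):

      Vagrant.configure("2") do |config|
        config.vm.box = "hashicorp/precise32"
        config.vm.boot_timeout = 600                 # default wait is much shorter
        config.ssh.username = "vagrant"
        config.ssh.private_key_path = "~/.vagrant.d/insecure_private_key"
        config.vm.provider "virtualbox" do |vb|
          vb.gui = true                              # keep the console visible while debugging
        end
      end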

    Read the article

  • Dovecot Virtual Users Not Authenticating

    - by blankabout
    We have a standard Postfix/Dovecot installation working perfectly with real users, but we cannot work out how to add virtual users; all virtual user login attempts fail with authentication errors. Following are snippets from the configuration files:

        /etc/postfix/main.cf:
        virtual_mailbox_domains = virtualexample.com
        virtual_mailbox_base = /var/spool/vhosts
        virtual_mailbox_recipients = hash:/etc/postfix/virtual_mailbox_recipients

        /etc/dovecot/dovecot.conf:
        !include conf.d/*.conf

        /etc/dovecot/conf.d/10-auth.conf:
        auth_mechanisms = cram-md5 digest-md5 plain
        passdb {
          driver = passwd-file
          # Path for passwd-file. Also set the default password scheme.
          args = scheme=cram-md5 /etc/cram-md5.pwd
        }

        /etc/cram-md5.pwd:
        [email protected]{MD5}$1$uIMvzy92$9Xt67B/qw4u6txkkxzne80

    This is a snippet from the log when a login attempt is made:

        auth: Debug: Loading modules from directory: /usr/lib64/dovecot/auth
        auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libauthdb_ldap.so
        auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libdriver_sqlite.so
        auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libmech_gssapi.so
        auth: Debug: passwd-file /etc/cram-md5.pwd: Read 1 users
        auth: Debug: auth client connected (pid=21990)
        auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011lip=1.1.1.1#011rip=2.2.2.2#011lport=143#011rport=51774
        auth: Debug: client out: CONT#0111#011PDI1Njc0NjQ1NzQ3MTY0NTkuMTM0MTIxNzkwN0BncDM+
        auth: Debug: client in: CONT
        auth: Debug: passwd-file([email protected],2.2.2.2): lookup: [email protected] file=/etc/cram-md5.pwd
        auth: Debug: client out: OK#0111#[email protected]
        auth: Debug: master in: REQUEST#0111630404609#01121990#0111#011b66b5f46b520a08e1d19d3d249be7073
        auth: Debug: passwd([email protected],2.2.2.2): lookup
        auth: passwd([email protected],2.2.2.2): unknown user
        auth: Error: userdb([email protected],2.2.2.2): user not found from userdb passwd
        auth: Debug: master out: NOTFOUND#0111630404609
        imap: Error: Authenticated user not found from userdb, auth lookup id=1630404609 (client-pid=21990 client-id=1)
        imap-login: Internal login failure (pid=21990 id=1) (auth failed, 1 attempts): user=, method=CRAM-MD5, rip=2.2.2.2, lip=1.1.1.1, mpid=21993
        auth: Debug: auth client connected (pid=22010)
        auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011lip=1.1.1.1#011rip=2.2.2.2#011lport=143#011rport=51775
        auth: Debug: client out: CONT#0111#011PDcxMDkwNDY1NTQzODUzMDkuMTM0MTIxNzkyOEBncDM+
        auth: Debug: client in: CONT
        auth: Debug: passwd-file([email protected],2.2.2.2): lookup: [email protected] file=/etc/cram-md5.pwd
        auth: Debug: client out: OK#0111#[email protected]
        auth: Debug: master in: REQUEST#011343539713#01122010#0111#011e47b1345784e2845d59e794afa9a6bbe
        auth: Debug: passwd([email protected],2.2.2.2): lookup
        auth: passwd([email protected],2.2.2.2): unknown user
        auth: Error: userdb([email protected],2.2.2.2): user not found from userdb passwd
        auth: Debug: master out: NOTFOUND#011343539713
        imap: Error: Authenticated user not found from userdb, auth lookup id=343539713 (client-pid=22010 client-id=1)
        imap-login: Internal login failure (pid=22010 id=1) (auth failed, 1 attempts): user=, method=CRAM-MD5, rip=2.2.2.2, lip=1.1.1.1, mpid=22011

    It would appear that the user lookup is not working, even though the log suggests that Dovecot is using the /etc/cram-md5.pwd file and the user is configured in that same file. There are, of course, dozens of examples of using virtual users with Dovecot, but all the ones we have found either refer to Dovecot 1.x (we are using 2.x), use only virtual users (we must use real AND virtual users), or want to use a MySQL db, whereas we need to use a text file. Some hints about where we are going wrong would be very much appreciated.
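    For what it is worth, the log makes it look as if the passdb lookup succeeds and only the userdb lookup falls through to the system passwd database, so this is the sort of userdb block we suspect we are missing in 10-auth.conf; the uid/gid and home template here are our assumption rather than something taken from a working setup:

        userdb {
          driver = static
          # vmail user/group and the vhosts path layout are illustrative values
          args = uid=vmail gid=vmail home=/var/spool/vhosts/%d/%n allow_all_users=yes
        }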

    Read the article

  • How could I let SkyDrive desktop sync to MicroSD in Windows 8 tablet?

    - by peSHIr
    I have a Samsung Slate 7 tablet with (now) Windows 8 on it. The machine has a 64 GB SSD, and I have a 64 GB MicroSD card in it. I also have a SkyDrive on my main Microsoft ID that contains about 45 GB of content. With Windows and some development stuff installed, my SkyDrive will not fit on the main drive of the tablet. (Besides, my idea was to keep data on the memory card anyway, to make it easier to repave the machine without data loss if need be.)

    My problem should now be clear: I want to install the SkyDrive desktop app to sync my SkyDrive to the MicroSD card. This is not possible, as SkyDrive does not allow syncing files to removable drives. I have tried a number of things already, but none of them worked:

    - Using the mklink command-line tool to create a directory link/junction from a folder on the SSD to a folder on the MicroSD, and then trying to point the SkyDrive sync at the SSD link folder. SkyDrive, however, still recognizes this as something it does not want to sync onto.
    - The various filter drivers mentioned on Agnipulse (including the Hitachi one) that should make Windows see some or all of the removable drives in the system as fixed drives do not seem to work on (64-bit) Windows 8: they either can't be installed, do nothing, and/or send Windows 8 into Automatic Repair mode on reboot.
    - The Lexar BootIt app seems to be meant to flip the relevant bit in the on-board controller of supported USB pen drives, but I tried it anyway. Of course it did nothing to how the MicroSD card is seen.

    I have now run out of ideas, it seems, and I was wondering if anyone here has a solution to let Windows 8 see the MicroSD card in my tablet as a fixed drive instead of a removable drive, or some other way of getting the SkyDrive desktop app to sync my SkyDrive data to that MicroSD card. And to be complete: this is not a duplicate of this or this, as those ask about getting multiple partitions on USB drives to work on Windows XP. This question is specifically about getting desktop SkyDrive to sync to a MicroSD card in Windows 8, which seems to be a question I have not seen on Super User so far.
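    For completeness, this is the shape of the junction I tried; the paths are only illustrative, not my actual folder layout:

        REM run from an elevated command prompt (D: is the MicroSD card here)
        mklink /J "C:\Users\<me>\SkyDrive" "D:\SkyDrive"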

    Read the article

  • Proper Imaging Procedures to Restore and Deploy Image with Separate System Reserved Partition

    - by alharaka
    UPDATE: As per my experience here, no one responded. If I do not hear back from TechNet forum members about it, I will post a bounty here, if it makes a difference.

    I have banged my head against a wall for what seems like all week. I am going to explain my simple procedure, and how none of it, absolutely none, seems to work afterward, despite trying a few alternatives and despite everyone on the internet assuming this is how to do it.

    Diskpart Commands to Create FS Structure

        REM Select the disk targeted for deployment.
        REM
        REM NOTE: Usually disk 0, but drive failure can make it external USB
        REM media. This will erase the drive regardless!
        select disk 0
        REM Remove previous formatting.
        clean
        REM Create System Reserved partition bootloader and files.
        create partition primary size=100
        REM Format the volume
        format fs=ntfs label="System Reserved" quick override noerr
        REM Assign the System Reserved partition the D: mount for now
        assign letter=C
        REM The main system partition, size not specified to occupy whole drive.
        create partition primary
        REM Format the volume
        format fs=ntfs quick override noerr
        REM Assign the OS partition the D: mount for now
        assign letter=D
        REM Make this the active/bootable partition.
        sel disk 0
        sel partition 1
        active
        REM Close out the diskpart session.
        exit

    Now, I thought this was madness, but it turns out that the System Reserved partition and the standard "system partition" (C:, commonly both the boot and system volume, where you find the Windows directory AND the bootmgr/ntldr boot files - this is where Windows 7 diverges), as mounted in the Windows PE session where I run these commands, do not matter. See reference here. Since this needs to be BitLocker-ready, enter this crappy System Reserved partition: a separate 100 MB of awesome that goes before the regular boot volume. I do this, then I proceed to the next step.

    Deploy System Reserved and Normal System Images

        REM C is still the "System Reserved Partition", and the image is just like it sounds.
        imagex /apply G:\images\systemreserved.wim 1 C:
        REM D is now what will be the C: system partition on reboot, supposedly.
        imagex /apply G:\images\testimage.wim 1 D:

    Reboot the system

    Now, the images I just captured should look good. This is not even sysprepped; I am reapplying the same fscking image I prepared on the same reference workstation hours before. The problem is I get 0xc000000e, could not detect the accessible boot device \Windows\system32\winload.exe, or different kinds of nonsense revolving around not being able to find the boot volume with all the right files. I try different variations of things, and now none of them work. I tried repairs with bcdboot, with and without a fresh System Reserved partition, with bootrec, and by manually editing the damn BCD store with bcdedit. I tried finalizing the above process with and without bootsect /nt60 C: /force. I need to wrap up and automate this procedure. What am I doing wrong that does not make the image happy, but really just miserable?
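    For reference, this is the sort of post-apply BCD rebuild I have been attempting from the same Windows PE session; treat it as a sketch of one variation, with the drive letters as they are mounted in that session, not as a known-good recipe:

        REM C: = System Reserved, D: = freshly applied Windows volume (PE-session letters)
        bcdboot D:\Windows /s C:
        bootsect /nt60 C: /force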

    Read the article

  • FreeBSD jail with IPFW with loopback - unable to connect loopback interface

    - by khinester
    I am trying to configure a single-IP jail with a loopback interface, but I am unsure how to configure the IPFW rules to allow traffic to pass between the jail and the network card on the server. I have followed http://blog.burghardt.pl/2009/01/multiple-freebsd-jails-sharing-one-ip-address/ and https://forums.freebsd.org/viewtopic.php?&t=30063, but without success. Here is what I have in my ipfw.rules:

        # vim /usr/local/etc/ipfw.rules
        ext_if="igb0"
        jail_if="lo666"
        IP_PUB="192.168.0.2"
        IP_JAIL_WWW="10.6.6.6"
        NET_JAIL="10.6.6.0/24"
        IPF="ipfw -q add"
        ipfw -q -f flush

        # loopback
        $IPF 10 allow all from any to any via lo0
        $IPF 20 deny all from any to 127.0.0.0/8
        $IPF 30 deny all from 127.0.0.0/8 to any
        $IPF 40 deny tcp from any to any frag

        # stateful
        $IPF 50 check-state
        $IPF 60 allow tcp from any to any established
        $IPF 70 allow all from any to any out keep-state
        $IPF 80 allow icmp from any to any

        # open ports: ftp (20,21), ssh (22), mail (25), dns (53) etc
        $IPF 120 allow tcp from any to any 21 out
        $IPF 130 allow tcp from any to any 22 in
        $IPF 140 allow tcp from any to any 22 out
        $IPF 150 allow tcp from any to any 25 in
        $IPF 160 allow tcp from any to any 25 out
        $IPF 170 allow udp from any to any 53 in
        $IPF 175 allow tcp from any to any 53 in
        $IPF 180 allow udp from any to any 53 out
        $IPF 185 allow tcp from any to any 53 out

        # HTTP
        $IPF 300 skipto 63000 tcp from any to me http,https setup keep-state
        $IPF 300 skipto 63000 tcp from any to me http,https setup keep-state

        # deny and log everything
        $IPF 500 deny log all from any to any

        # NAT
        $IPF 63000 divert natd ip from any to any via $jail_if out
        $IPF 63000 divert natd ip from any to any via $jail_if in

    But when I create a jail with:

        # ezjail-admin create -f continental -c zfs node 10.6.6.7
        /usr/jails/node/.
        /usr/jails/node/./etc
        /usr/jails/node/./etc/resolv.conf
        /usr/jails/node/./etc/ezjail.flavour.continental
        /usr/jails/node/./etc/rc.d
        /usr/jails/node/./etc/rc.conf
        4 blocks
        find: /usr/jails/node/pkg/: No such file or directory
        Warning: IP 10.6.6.7 not configured on a local interface.
        Warning: Some services already seem to be listening on all IP, (including 10.6.6.7)
          This may cause some confusion, here they are:
        root     syslogd    1203  6  udp6   *:514    *:*
        root     syslogd    1203  7  udp4   *:514    *:*

    I get these warnings, and then when I go into the jail environment I am unable to install any ports. Any advice would be much appreciated.
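    In case it helps frame the question, this is roughly the host rc.conf wiring I understand is needed for the cloned loopback and the divert NAT; the netmasks and the natd settings are my assumptions rather than a tested configuration:

        # /etc/rc.conf (host) - sketch only
        cloned_interfaces="lo666"
        ifconfig_lo666="inet 10.6.6.6 netmask 255.255.255.0"
        ifconfig_lo666_alias0="inet 10.6.6.7 netmask 255.255.255.255"
        firewall_enable="YES"
        firewall_script="/usr/local/etc/ipfw.rules"
        natd_enable="YES"
        natd_interface="igb0"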

    Read the article

  • multiple puppet masters set up using inventory

    - by Oli
    I have managed to set up multiple puppet masters, with one puppet master acting as a CA; clients are able to get a certificate from this CA server but use their designated puppet master to get their manifests. See this question for more info: multiple puppet masters. However, there are a couple of things I have had to do to get this working correctly, and I have an error which I'll get to.

    First of all, to get inventory working for a puppet client (PC) connecting to its designated puppet master (PM), I had to copy the CA certs on PM1 to the PM2 ca directory. I ran this command:

        scp [email protected]:/var/lib/puppet/ssl/ca/* [email protected]:/var/lib/puppet/ssl/ca/.

    Once I had done that, I was able to uncomment the SSLCertificateChainFile, SSLCACertificateFile & SSLCARevocationFile section of my rack.conf virtual host file on PM2. Once I had done this, inventory started to work. Does this sound like an acceptable way to do things?

    Secondly, in the puppet.conf file I am setting the designated PM server for that client. Unless there is a better way, this is how it'll work in my production setup: PC1 will talk to PM1 and PC2 will talk to PM2. This is where I have an error. When PC2 first requests a cert from the CA on PM1, the cert appears and I then sign it on the CA on PM1. When I then do a puppet agent --test on PC2 (which has server = PM2 in puppet.conf), I get this error:

        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: puppet-master2.test.net(10.1.1.161) access to /certificate_revocation_list/ca [find] at :112

    However, if I change the PC2 puppet.conf file to specify server = PM1 and then rerun puppet agent --test, I do not get any errors. I can then revert the change in the puppet.conf file back to server = PM2 and everything seems to run normally.

    Do I have to set up some kind of ProxyPassMatch on PM2 for requests made from clients to /certificate_revocation_list/* and redirect them to PM1? Or how can I fix this error?

    Cheers, Oli
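    For context, this is the agent-side shape I have been experimenting with; the PM1 hostname is assumed to follow the same pattern as the one in the error above, and whether ca_server also covers the CRL lookup is exactly what I am unsure about:

        # /etc/puppet/puppet.conf on PC2 - sketch only
        [agent]
            server    = puppet-master2.test.net
            ca_server = puppet-master1.test.net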

    Read the article

< Previous Page | 676 677 678 679 680 681 682 683 684 685 686 687  | Next Page >