Search Results

Search found 22521 results on 901 pages for 'script fu'.

  • Run custom javascript when page loads

    - by Husain Dalal
    Ran into a neat way to load and run custom JavaScript when an ADF page loads:

        <af:resource type="javascript">
          function onLoad() {
            alert("I am running!");
          }
          // Script block
          if (window.addEventListener) {
            window.addEventListener("load", onLoad, false);
          } else if (window.attachEvent) {
            window.detachEvent("onload", onLoad);
            window.attachEvent("onload", onLoad);
          } else {
            window.onload = onLoad;
          }
        </af:resource>

    Reference: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_pagelet.htm#BABGHCBF

  • Jquery mobile and Google maps [on hold]

    - by Jack
    I have been trying to get my Google map to display within a page of a mobile app. The map will display for a second and then disappear. I have read about a jQuery Mobile bug, but I can't seem to find a way to get this code to work. Any help would be greatly appreciated.

        <script>
        var geocoder;
        var currentLocation;
        var searchResults;
        var map;
        var directionsDisplay;
        var directionsService;

        function init() {
            geocoder = new google.maps.Geocoder();
            if (navigator.geolocation) {
                navigator.geolocation.watchPosition(showLocation, locationError);
            } else {
                alert("Geolocation not supported on this device");
                return;
            }
        } // init function

        function showLocation(location) { // start showLocation
            currentLocation = new google.maps.LatLng(location.coords.latitude, location.coords.longitude);
            $('#lat').attr("value", currentLocation.lat());
            $('#lng').attr("value", currentLocation.lng());
            geocoder = new google.maps.Geocoder();
            geocoder.geocode({'latLng': currentLocation}, function(results, status) {
                if (status == google.maps.GeocoderStatus.OK) {
                    if (results[0]) {
                        var address = results[0].formatted_address;
                        $('#loc').html(results[0].formatted_address);
                        var info = "Latitude: " + location.coords.latitude + " Longitude: " + location.coords.longitude + "<br />";
                        info += "Location accurate within " + location.coords.accuracy + " meters <br /> Last Update: " + new Date(location.timestamp).toLocaleString();
                        $('#acc').html(info);
                        $('#address').attr("value", results[0].formatted_address);
                    } else {
                        alert('No results found');
                    } // end else
                    //if (!map) initMap();
                } else {
                    $('#loc').html('Geocoder failed due to: ' + status);
                } // end else
            }); // end of geocode callback
            if (!map) initMap();
        } // end showLocation function

        function locationError(error) {
            switch (error.code) {
                case error.PERMISSION_DENIED:
                    alert("Geolocation access denied or disabled. To enable geolocation on your iPhone, go to Settings > General > Location Services");
                    break;
                case error.POSITION_UNAVAILABLE:
                    alert("Current location not available");
                    break;
                case error.TIMEOUT:
                    alert("Timeout");
                    break;
                default:
                    alert("unknown error");
                    break;
            } // end switch
        } // end locationError

        function initMap() {
            var mapOptions = {
                zoom: 14,
                mapTypeId: google.maps.MapTypeId.ROADMAP,
                center: currentLocation
            }; // var mapOptions
            map = new google.maps.Map(document.getElementById('mapDiv'), mapOptions);
            google.maps.event.trigger(map, 'resize');
            var bounds = new google.maps.LatLngBounds();
            bounds.extend(currentLocation);
            map.fitBounds(bounds);

            // new code (commented out while experimenting)
            //var center;
            //function calculateCenter() {
            //    center = map.getCenter();
            //}
            //google.maps.event.addDomListener(map, 'idle', function() {
            //    calculateCenter();
            //});
            //google.maps.event.addListenerOnce(map, 'idle', function() {
            //    google.maps.event.trigger(map, 'resize');
            //});
            //google.maps.event.addDomListener(window, 'resize', function() {
            //    map.setCenter(center);
            //}); // end new code
        } // end initMap()

        //-------------------------------------------------------------------------------
        $(document).on("pageinit", init);
        </script>

  • Best way to process a queue in C# (PDF treatment)

    - by Bartdude
    First of all, let me explain what I would like to do. I already have a long-running webapp developed in ASP.NET (C#) 2.0. In this app, users can upload standard PDF files (text + pics). The app runs in production on Windows Server 2003 and has a dedicated database server (SQL Server 2008), also running Windows Server 2003. I am a fairly experienced web developer myself, but have never actually programmed anything non-web (or at least nothing serious).

    I plan on adding a feature to the webapp for which I need a JPG snapshot of each page of the PDF. Creating these "thumbnails" isn't the big deal as such; I already do it inside my webapp using Ghostscript. I've only done it on one-page documents so far, though, and the new feature will need to process bigger documents. For this process to stay transparent both for the admins and for the end users, I would like to implement some kind of queue to defer the thumbnail processing. There again, no problem creating the queue: it will consist of records in a table, with enough info to find the PDF document again. Then I will need to process this queue, and that's where my questions start. Obviously the best solution for processing it isn't an ASP script, so I will have to get out of my familiar environment. No problem, but I have no idea which direction to go. Therefore, a few questions:

    1. What should I develop? I presumably need something that sits on standby on the server, runs when needed, then returns to an idle state until further notice. Should I be looking into a Windows service? Is there another, more appropriate type of project?
    2. Depending on the first answer, what will be the approach? Should SQL Server somehow "tell" the program/service to process the queue, or should that program/service periodically check the state of the queue and treat new items? In either case, which functionality can I use? We're not talking about hundreds of PDFs a day (max 50, maybe), so I can totally afford to treat the queue one item at a time.
    3. Can you confirm I don't have to look much further into threads? (I found a lot of answers talking about threads in queue processing, but it looks like overkill for my needs.)
    4. Maybe linked to the previous question: what about concurrent calls to the program, whatever it is? Suppose it is currently running and a new record arrives in the queue; what should the behaviour be?

    I don't need very detailed answers and would already be happy with answers like "You can do the processing with a service, and yes it's possible to have SQL Server on machine A trigger a service start on machine B" or "You have to develop xxx and then use the scheduler to run it every xxx minutes". I don't mind reading articles and so on, but I can hardly afford to spend too much time learning stuff only to realize I went the wrong way for this project, so basically I'm trying to narrow down the scope of what I need to investigate. Thanks for reading; I hope I'll find some helping hands on here :-)

  • Query total page count via SNMP HP Laserjet

    - by Tim
    I was asked to get hold of the total page counts for the 100+ printers we have at work. All of them are HP LaserJets or Business Inkjets of some description, and the vast majority are connected via some form of HP JetDirect network card/switch. After many hours of typing in IP addresses and copying and pasting the relevant figure into Excel, I have now been asked to do this on a weekly basis. This led me to think there must be an easier way; as an IT professional I can surely work out some time-saving method to solve this. Suffice it to say I do not feel very professional now, after a day or so of trying to make SNMP work for me!

    From what I understand, the first thing is to enable SNMP on the printer. Done. Next I need something to query SNMP with. I decided to go open source and free, and someone here recommended Net-SNMP as a decent tool (I would like to have just added the printers as nodes in SolarWinds, but apparently we are somewhat tight on licences). Next I need the name of the MIB. I believe HP-LASERJET-COMMON-MIB has the correct information in it, so I downloaded it and added it to Net-SNMP. Now I need the OID, which after much scouring I believe is printed-media-simplex-count (we have no duplex printers, at least none that we are interested in). Running the following command yields the following demoralising output:

        snmpget -v 2c -c public 10.168.5.1 HP-LASERJET-COMMON-MIB:.1.3.6.1.2.1.1.16.1.1.1

        Unlinked OID in HP-LASERJET-COMMON-MIB: hp ::= { enterprises 11 }
        Undefined identifier: enterprises near line 3 of C:/usr/share/snmp/mibs/HP-LASERJET-COMMON-MIB..txt
        HP-LASERJET-COMMON-MIB:.1.3.6.1.2.1.1.16.1.1.1:

    (The OID was derived from running:

        snmptranslate -IR -On printed-media-simplex-count
        Unlinked OID in HP-LASERJET-COMMON-MIB: hp ::= { enterprises 11 }
        Undefined identifier: enterprises near line 3 of C:/usr/share/snmp/mibs/HP-LASERJET-COMMON-MIB..txt
        .1.3.6.1.2.1.1.16.1.1.1
    )

    Am I barking up the wrong tree completely with this? My aim was to script it all to output to a file for all the printers' IP addresses and then plonk that in Excel for my lords and masters to digest at their leisure. I have a feeling I am using either the wrong MIB or the wrong OID from said MIB (or both). Does anyone have any pointers on this for me? Or should I give up and go back to navigating each printer's web page individually (hoping not)?
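    One hedged way to attack the scripting part: the vendor MIB isn't strictly needed, because the standard Printer-MIB exposes a lifetime impression counter at the numeric OID 1.3.6.1.2.1.43.10.2.1.4.1.1 (prtMarkerLifeCount), which most JetDirect cards answer. The sketch below assumes a plain-text file of printer IPs (printers.txt is a made-up name) and the stock Net-SNMP tools:

        #!/bin/sh
        # printers.txt: one printer IP per line (assumed to exist)
        # Output: ip,pagecount as CSV, ready to paste into Excel
        while read ip; do
            count=$(snmpget -v1 -c public -Oqv "$ip" 1.3.6.1.2.1.43.10.2.1.4.1.1)
            echo "$ip,$count"
        done < printers.txt > pagecounts.csv

    Since the numeric OID is used directly, the broken MIB file never has to load; -Oqv just strips the output down to the bare value.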

  • L2TP iptables port forward

    - by The_cobra666
    Hi all, I'm setting up port forwarding for an L2TP VPN connection to the local Windows 2003 VPN server. The router is a simple Debian machine with iptables. The VPN server works perfectly, but I cannot log in from the WAN, so I'm missing something. The VPN server uses a pre-shared key (L2TP) and gives out IPs in the range 192.168.3.0. The local network range is 192.168.2.0/24. I added the route with:

        route add -net 192.168.3.0 netmask 255.255.255.240 gw 192.168.2.13   # (the VPN server)

    These are the forwarding rules:

        iptables -t nat -A PREROUTING -p udp --dport 1701 -i eth0 -j DNAT --to 192.168.2.13
        iptables -A FORWARD -p udp --dport 1701 -j ACCEPT
        iptables -t nat -A PREROUTING -p udp --dport 500 -i eth0 -j DNAT --to 192.168.2.13
        iptables -A FORWARD -p udp --dport 500 -j ACCEPT
        iptables -t nat -A PREROUTING -p udp --dport 4500 -i eth0 -j DNAT --to 192.168.2.13
        iptables -A FORWARD -p udp --dport 4500 -j ACCEPT
        iptables -t nat -A PREROUTING -p 50 -j DNAT --to 192.168.2.13
        iptables -A FORWARD -p 50 -j ACCEPT
        iptables -t nat -A PREROUTING -p 51 -j DNAT --to 192.168.2.13
        iptables -A FORWARD -p 51 -j ACCEPT

    The whole iptables script is (without the lines from above):

        echo 1 > /proc/sys/net/ipv4/ip_forward
        echo 1 > /proc/sys/net/ipv4/tcp_syncookies

        # Flush tables
        iptables -F INPUT
        iptables -F OUTPUT
        iptables -F FORWARD
        iptables -t nat -F

        # Drop traffic by default
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT ACCEPT

        # Allow outbound traffic and enable NAT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

        # RDP forwarding for Windows servers
        iptables -t nat -A PREROUTING -p tcp --dport 3389 -i eth0 -j DNAT --to 192.168.2.10:3389
        iptables -A FORWARD -p tcp --dport 3389 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 3340 -i eth0 -j DNAT --to 192.168.2.12:3340
        iptables -A FORWARD -p tcp --dport 3340 -j ACCEPT

        # Allow SSH traffic
        iptables -t nat -A PREROUTING -p tcp --dport 22 -i eth0 -j DNAT --to-destination 192.168.2.1
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT

        # Allow loopback traffic
        iptables -A INPUT -i lo -j ACCEPT

        # Allow the local network
        iptables -A INPUT -i eth1 -j ACCEPT

        # Accept established traffic
        iptables -A INPUT -i eth0 --match state --state RELATED,ESTABLISHED -j ACCEPT

        # Rate-limit, then drop ICMP messages
        iptables -A INPUT -p icmp -i eth0 -m limit --limit 10/minute -j ACCEPT
        iptables -A INPUT -p icmp -i eth0 -j REJECT

        ifconfig eth1 192.168.2.1/24
        ifconfig eth0 XXXXXXXXXXXXX/30
        ifconfig eth0 up
        ifconfig eth1 up
        route add default gw XXXXXXXXXXXXXXXXXXX
        route add -net 192.168.3.0 netmask 255.255.255.240 gw 192.168.2.13
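    A hedged observation on the protocol 50/51 rules above (a guess, since the full traffic path isn't shown): 50 and 51 are IP protocol numbers (ESP and AH), not ports, and iptables can match them by name. As written they carry no -i eth0 match, so they apply to every interface, including LAN traffic. For L2TP/IPsec behind NAT, most real traffic should arrive as NAT-T on UDP 4500 anyway, so ESP forwarding mainly matters for clients that don't use NAT-T:

        # Equivalent rules, matched by protocol name and limited to the WAN interface
        iptables -t nat -A PREROUTING -i eth0 -p esp -j DNAT --to 192.168.2.13
        iptables -A FORWARD -p esp -j ACCEPT
        iptables -t nat -A PREROUTING -i eth0 -p ah -j DNAT --to 192.168.2.13
        iptables -A FORWARD -p ah -j ACCEPT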

  • Encountering error "Linux ip -6 addr add failed" while setting up OpenVPN client

    - by Mickel
    I am trying to set up my router to use OpenVPN and have gotten quite far (I think), but something seems to be missing and I am not sure what. Here is my configuration for the client:

        client
        dev tun
        proto udp
        remote ovpn.azirevpn.net 1194
        remote-random
        resolv-retry infinite
        auth-user-pass /tmp/password.txt
        nobind
        persist-key
        persist-tun
        ca /tmp/AzireVPN.ca.crt
        remote-cert-tls server
        reneg-sec 0
        verb 3

    OpenVPN client log:

        Nov 8 15:45:13 rc_service: httpd 15776:notify_rc start_vpnclient1
        Nov 8 15:45:14 openvpn[27196]: OpenVPN 2.3.2 arm-unknown-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [MH] [IPv6] built on Nov 1 2013
        Nov 8 15:45:14 openvpn[27196]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Nov 8 15:45:14 openvpn[27196]: Socket Buffers: R=[116736->131072] S=[116736->131072]
        Nov 8 15:45:14 openvpn[27202]: UDPv4 link local: [undef]
        Nov 8 15:45:14 openvpn[27202]: UDPv4 link remote: [AF_INET]178.132.75.14:1194
        Nov 8 15:45:14 openvpn[27202]: TLS: Initial packet from [AF_INET]178.132.75.14:1194, sid=44d80db5 8b36adf9
        Nov 8 15:45:14 openvpn[27202]: WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
        Nov 8 15:45:14 openvpn[27202]: VERIFY OK: depth=1, C=RU, ST=Moscow, L=Moscow, O=Azire Networks, OU=VPN, CN=Azire Networks, name=Azire Networks, [email protected]
        Nov 8 15:45:14 openvpn[27202]: Validating certificate key usage
        Nov 8 15:45:14 openvpn[27202]: ++ Certificate has key usage 00a0, expects 00a0
        Nov 8 15:45:14 openvpn[27202]: VERIFY KU OK
        Nov 8 15:45:14 openvpn[27202]: Validating certificate extended key usage
        Nov 8 15:45:14 openvpn[27202]: ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
        Nov 8 15:45:14 openvpn[27202]: VERIFY EKU OK
        Nov 8 15:45:14 openvpn[27202]: VERIFY OK: depth=0, C=RU, ST=Moscow, L=Moscow, O=AzireVPN, OU=VPN, CN=ovpn, name=ovpn, [email protected]
        Nov 8 15:45:15 openvpn[27202]: Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Nov 8 15:45:15 openvpn[27202]: Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Nov 8 15:45:15 openvpn[27202]: Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Nov 8 15:45:15 openvpn[27202]: Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Nov 8 15:45:15 openvpn[27202]: Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 2048 bit RSA
        Nov 8 15:45:15 openvpn[27202]: [ovpn] Peer Connection Initiated with [AF_INET]178.132.75.14:1194
        Nov 8 15:45:17 openvpn[27202]: SENT CONTROL [ovpn]: 'PUSH_REQUEST' (status=1)
        Nov 8 15:45:17 openvpn[27202]: PUSH: Received control message: 'PUSH_REPLY,ifconfig-ipv6 2a03:8600:1001:4010::101f/64 2a03:8600:1001:4010::1,route-ipv6 2000::/3 2A03:8600:1001:4010::1,redirect-gateway def1 bypass-dhcp,dhcp-option DNS 194.1.247.30,tun-ipv6,route-gateway 178.132.77.1,topology subnet,ping 3,ping-restart 15,ifconfig 178.132.77.33 255.255.255.192'
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: timers and/or timeouts modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: --ifconfig/up options modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: route options modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: route-related options modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
        Nov 8 15:45:17 openvpn[27202]: TUN/TAP device tun0 opened
        Nov 8 15:45:17 openvpn[27202]: TUN/TAP TX queue length set to 100
        Nov 8 15:45:17 openvpn[27202]: do_ifconfig, tt->ipv6=1, tt->did_ifconfig_ipv6_setup=1
        Nov 8 15:45:17 openvpn[27202]: /usr/sbin/ip link set dev tun0 up mtu 1500
        Nov 8 15:45:18 openvpn[27202]: /usr/sbin/ip addr add dev tun0 178.132.77.33/26 broadcast 178.132.77.63
        Nov 8 15:45:18 openvpn[27202]: /usr/sbin/ip -6 addr add 2a03:8600:1001:4010::101f/64 dev tun0
        Nov 8 15:45:18 openvpn[27202]: Linux ip -6 addr add failed: external program exited with error status: 254
        Nov 8 15:45:18 openvpn[27202]: Exiting due to fatal error

    Any ideas are most welcome!
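    A hedged line of attack, since the server pushes ifconfig-ipv6 and the client then shells out to the firmware's ip binary: establish whether that binary and the kernel support IPv6 at all. On router firmware, ip is often a BusyBox applet built without -6 support, and the exit status 254 comes from that external program rather than from OpenVPN itself:

        # Re-run the exact command OpenVPN attempted, by hand
        /usr/sbin/ip -6 addr add 2a03:8600:1001:4010::101f/64 dev tun0
        echo $?

        # Does the kernel have IPv6 at all?
        ls /proc/net/if_inet6 || modprobe ipv6

    If the manual command fails the same way, the fix lies in the firmware's userland or kernel rather than in the OpenVPN config. OpenVPN 2.3 has no pull-filter option to discard the pushed IPv6 settings (that arrived in 2.4), so a client up script or a full iproute2 build are the usual workarounds.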

  • RDS installation failure on 2012 R2 Server Core VM in Hyper-V Server

    - by Giles
    I'm currently installing a test bed for my firm's infrastructure replacement. Ten or so Windows/Linux servers will be replaced by two physical servers running Hyper-V Server. All services (DC, RDS, SQL) will be on Windows 2012 R2 Server Core VMs, Exchange on Server 2012 R2 GUI, and the rest are things like Elastix and MailArchiver, which aren't part of the equation thus far. I have installed Hyper-V Server on a test box and successfully got two virtual DCs running, SQL 2014 running, and an 8.1 client which I use for the RSAT tools.

    When trying to install RDS (the old-fashioned kind, not the newer VDI style), I get a failed installation due to the server not being able to reboot. A couple of articles have said not to do it locally, so I've moved on. Sitting at the PowerShell prompt on the Domain Controller or SQL server (both Server Core), I run the following commands:

        Import-Module RemoteDesktop
        New-SessionDeployment -ConnectionBroker "AlstersTS.Alsters.local" -SessionHost "AlstersTS.Alsters.local"

    The installation begins, carries on for 2 or 3 minutes, then I receive the following error message:

        New-SessionDeployment : Validation failed for the "RD Connection Broker" parameter.
        AlstersTS.Alsters.local Unable to connect to the server by using Windows PowerShell remoting.
        Verify that you can connect to the server.
        At line:1 char:1
        + New-SessionDeployment -ConnectionBroker "AlstersTS.Alsters.local" -SessionHost " ...
        +
            + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
            + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,New-SessionDeployment

    So far, I have:

    • Triple, triple checked the syntax.
    • Tried various other commands, and a script, to accomplish the same task.
    • Checked that DNS is functioning as it should.
    • Checked, to the best of my knowledge, that AD is working as it should.
    • Checked that the Network Service has the needed permissions.
    • Created another VM and placed the two roles on different servers.
    • Deleted all VMs and started again with a new domain name (lather, rinse, repeat).
    • Performed the whole installation on a second physical box running Hyper-V Server.
    • Pleaded with it.

    Interestingly, if I perform the installation via a GUI installation, the thing just works! Now I know I could convert this to a Server Core role after installation, but that wouldn't teach me what was wrong in the first instance. I've probably gone 10 pages deep through various Google searches, each page getting a little less relevant. The closest matches seem to have good information, but none of it seems to be the fix for my setup. As a side note, I expected to be able to "tee" or "out-file" the error message into a text file, but couldn't get that to work either, so I've typed the error message in manually. Chaps, any suggestions, from the glaringly obvious to the long-winded and complex? Thanks!

  • Gnome Install Error (1) [closed]

    - by Guy1984
    I'm trying to install Gnome on my Ubuntu 12.04 (Precise Pangolin) and getting the following errors:

        root@***:~# sudo apt-get install gnome-core gnome-session-fallback
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        gnome-core is already the newest version.
        gnome-session-fallback is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.
        5 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up bluez (4.98-2ubuntu7) ...
        start: Job failed to start
        invoke-rc.d: initscript bluetooth, action "start" failed.
        dpkg: error processing bluez (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of gnome-bluetooth:
         gnome-bluetooth depends on bluez (>= 4.36); however:
          Package bluez is not configured yet.
        dpkg: error processing gnome-bluetooth (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of gnome-shell:
         gnome-shell depends on gnome-bluetooth (>= 3.0.0); however:
          Package gnome-bluetooth is not configured yet.
        dpkg: error processing gnome-shell (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of gnome-user-share:
         gnome-user-share depends on gnome-bluetooth; however:
          Package gnome-bluetooth is not configured yet.
        dpkg: error processing gnome-user-share (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of gnome-core:
         gnome-core depends on gnome-bluetooth (>= 3.0); however:
          Package gnome-bluetooth is not configured yet.
         gnome-core depends on gnome-shell (>= 3.0); however:
          Package gnome-shell is not configured yet.
         gnome-core depends on gnome-user-share (>= 3.0); however:
          Package gnome-user-share is not configured yet.
        dpkg: error processing gnome-core (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         bluez
         gnome-bluetooth
         gnome-shell
         gnome-user-share
         gnome-core
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any thoughts?
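    The root failure here is the bluez postinst script (its init job won't start); everything else is a dependency cascade from that. A hedged recovery path, assuming a standard 12.04 setup, is to find out why bluetooth fails to start and then let dpkg finish the half-configured packages:

        # See the real error from the upstart job
        sudo start bluetooth

        # Then resume configuration of everything left half-installed
        sudo dpkg --configure -a
        sudo apt-get install -f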

  • connect() failed (111: Connection refused) while connecting to upstream

    - by Burning the Codeigniter
    I'm experiencing 502 gateway errors when accessing a PHP file in a directory (http://domain.com/dev/index.php). The log simply says this:

        2011/09/30 23:47:54 [error] 31160#0: *35 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: domain.com, request: "GET /dev/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "domain.com"

    I've never experienced this before; how do I fix this type of 502 gateway error? This is the nginx.conf:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

        #mail {
        #    # See sample authentication script at:
        #    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
        #
        #    # auth_http localhost/auth.php;
        #    # pop3_capabilities "TOP" "USER";
        #    # imap_capabilities "IMAP4rev1" "UIDPLUS";
        #
        #    server {
        #        listen localhost:110;
        #        protocol pop3;
        #        proxy on;
        #    }
        #
        #    server {
        #        listen localhost:143;
        #        protocol imap;
        #        proxy on;
        #    }
        #}
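    The upstream in that log line is a FastCGI backend expected on 127.0.0.1:9000, which will be declared in one of the included site configs rather than in the nginx.conf shown. "Connection refused" there usually means the PHP backend isn't running, or is listening on a unix socket instead of that TCP port. A hedged first check, assuming a php5-fpm setup of that era (adjust paths and service names to yours):

        # Is anything actually listening on 9000?
        netstat -tlnp | grep :9000

        # If not, see what the pool config listens on, then (re)start it
        grep -r "^listen" /etc/php5/fpm/pool.d/
        sudo /etc/init.d/php5-fpm restart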

  • Server.CreateObject Fails when calling .Net object from ASP on 64-bit windows in IIS 32-bit mode

    - by DrFredEdison
    I have a server running Windows 2003 64-bit that runs IIS in 32-bit mode. I have a COM object that was registered using the following command:

        C:\WINDOWS\microsoft.net\Framework\v2.0.50727>regasm D:\Path\To\MyDll.dll /tlb:MyTLB.tlb /codebase

    When I create the object via ASP I get:

        Server object error 'ASP 0177 : 8000ffff'
        Server.CreateObject Failed
        /includes/a_URLFilter.asp, line 19
        8000ffff

    When I create the object in a .vbs script and use the 32-bit version of cscript (in \Windows\syswow64), it works fine. I've checked permissions on the DLL, and IUSR has Read/Execute. Even if I add IUSR to the Administrators group, I get the same error. This is the log from Process Monitor, filtering for the path of my DLL (annotated with my actions):

        [Stop IIS]
        1:56:30.0891918 PM  w3wp.exe  4088  CloseFile  D:\Path\To\MyDll.dll  SUCCESS

        [Start IIS]
        [Refresh ASP page that uses DLL]
        1:56:42.7825154 PM  w3wp.exe  2196  QueryOpen  D:\Path\To\MyDll.dll  SUCCESS  CreationTime: 8/19/2009 1:11:17 PM, LastAccessTime: 8/19/2009 1:30:26 PM, LastWriteTime: 8/18/2009 12:09:33 PM, ChangeTime: 8/19/2009 1:22:02 PM, AllocationSize: 20,480, EndOfFile: 20,480, FileAttributes: A
        1:56:42.7825972 PM  w3wp.exe  2196  QueryOpen  D:\Path\To\MyDll.dll  SUCCESS  CreationTime: 8/19/2009 1:11:17 PM, LastAccessTime: 8/19/2009 1:30:26 PM, LastWriteTime: 8/18/2009 12:09:33 PM, ChangeTime: 8/19/2009 1:22:02 PM, AllocationSize: 20,480, EndOfFile: 20,480, FileAttributes: A
        1:56:42.7826961 PM  w3wp.exe  2196  CreateFile  D:\Path\To\MyDll.dll  SUCCESS  Desired Access: Generic Read, Disposition: Open, Options: Synchronous IO Non-Alert, Non-Directory File, Attributes: N, ShareMode: Read, Delete, AllocationSize: n/a, Impersonating: SERVER2\IUSR_SERVER2, OpenResult: Opened
        1:56:42.7827194 PM  w3wp.exe  2196  CreateFileMapping  D:\Path\To\MyDll.dll  SUCCESS  SyncType: SyncTypeCreateSection, PageProtection:
        1:56:42.7827546 PM  w3wp.exe  2196  CreateFileMapping  D:\Path\To\MyDll.dll  SUCCESS  SyncType: SyncTypeOther
        1:56:42.7829130 PM  w3wp.exe  2196  Load Image  D:\Path\To\MyDll.dll  SUCCESS  Image Base: 0x6350000, Image Size: 0x8000
        1:56:42.7830590 PM  w3wp.exe  2196  Load Image  D:\Path\To\MyDll.dll  SUCCESS  Image Base: 0x6360000, Image Size: 0x8000
        1:56:42.7838855 PM  w3wp.exe  2196  CreateFile  D:\Webspace\SecurityDll\bin  SUCCESS  Desired Access: Read Data/List Directory, Synchronize, Disposition: Open, Options: Directory, Synchronous IO Non-Alert, Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a, Impersonating: SERVER2\IUSR_SERVER2, OpenResult: Opened
        1:56:42.7839081 PM  w3wp.exe  2196  QueryDirectory  D:\Path\To\MyDll.INI  NO SUCH FILE  Filter: SecurityDll.INI
        1:56:42.7839281 PM  w3wp.exe  2196  CloseFile  D:\Webspace\SecurityDll\bin  SUCCESS

        [Refresh ASP page that uses DLL]
        [Refresh ASP page that uses DLL]
        [Refresh ASP page that uses DLL]

    This DLL works fine on other servers running 32-bit Windows. I can't think of anything else to try to make this work. Any suggestions?

    UPDATE: The .dll is not in the GAC, it is compiled as 32-bit, and it is strongly signed.

  • Getting macro keys from a razer blackwidow to work on linux

    - by Journeyman Geek
    I picked up a Razer BlackWidow Ultimate that has additional keys meant for macros, which are set using a tool that's installed on Windows. I'm assuming that these aren't some fancypants joojoo keys and should emit scancodes like any other keys. Firstly, is there a standard way to check these scancodes in Linux? Secondly, how do I set these keys to do things in command-line and X-based Linux setups? My current Linux install is Xubuntu 10.10, but I'll be switching to Kubuntu once I have a few things fixed up. Ideally the answer should be generic and system-wide.

    Things I have tried so far:

    • showkeys from the built-in kbd package (in a separate VT) - macro keys not detected
    • xev - macro keys not detected
    • lsusb and evdev output
    • this AHK script's output suggests the M keys are not outputting standard scancodes

    Things I need to try:

    • SnoopyPro + reverse engineering (oh dear)
    • Wireshark - preliminary futzing around seems to indicate no scancodes are emitted when what I think is the keyboard is monitored and keys pressed. That might indicate the additional keys are a separate device or need to be initialised somehow. I need to cross-reference that with lsusb output from Linux in 3 scenarios: standalone, passed through to a Windows VM without the drivers installed, and the same with the drivers installed. lsusb only detects one device on a standalone Linux install.
    • It might be useful to check whether the mice use the same Razer Synapse driver, since that would mean some variation of razercfg might work (not detected; it only seems to work for mice).

    Things I have worked out:

    • In a Windows system with the driver, the keyboard is seen as a keyboard and a pointing device. Said pointing device uses, in addition to your bog-standard mouse drivers, a driver for something called a Razer Synapse. The mouse driver is seen in Linux under evdev and lsusb as well.
    • It is a single device under OS X, apparently, though I have yet to try the lsusb equivalent on that.
    • The keyboard goes into pulsing-backlight mode in OS X upon initialisation with the driver. This probably indicates there is some initialisation sequence sent to the keyboard on activation.
    • They are, in fact, fancypants joojoo keys.

    Extending this question a little: I have access to a Windows system, so if I need to use any tools on that to help answer the question, that's fine. I can also try it on systems with and without the config utility. The expected end result is still to make those keys usable on Linux, however. I also realise this is a very specific family of hardware, so I would be willing to test anything that makes sense on a Linux system if I have detailed instructions - this should open up the question to people who have Linux skills but no access to this keyboard.

    The minimum end result I require: I need these keys detected, and usable in any fashion, on any of the current mainstream Ubuntu variants.
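    On the "standard way to check these scancodes" part: one hedged, generic approach (assuming the extra keys surface on an input event node at all) is to enumerate every event device the keyboard registers, since gaming keyboards often expose two or three, and watch them raw while pressing the M keys:

        # Which event nodes belong to the keyboard?
        grep -iA5 razer /proc/bus/input/devices

        # Watch raw events on each node in turn (evtest is packaged in Ubuntu);
        # eventN below is a placeholder for whatever the previous command lists
        sudo evtest /dev/input/eventN

    If nothing at all appears on any node, that supports the "needs an initialisation sequence" theory rather than a keymapping problem.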

  • Mac OS X behind OpenLDAP and Samba

    - by Sam Hammamy
    I have been battling for a week now to get my Mac (Mountain Lion) to authenticate on my home network's OpenLDAP and Samba. From several sources, like the Ubuntu community docs and other blogs, and after a hell of a lot of trial and error and piecing things together, I have created a samba.ldif that will pass the smbldap-populate when combined with apple.ldif, and I have a fully functional OpenLDAP server and a Samba PDC that uses LDAP to authenticate the OS X machine. The problem is that when I log in, the home directory is not created or pulled from the server. I get the following in system.log:

        Sep 21 06:09:15 Sams-MacBook-Pro.local SecurityAgent[265]: User info context values set for sam
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got user: sam
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got ruser: (null)
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got service: authorization
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): no authauth availale for user.
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): failed: 7
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Failed to determine Kerberos principal name.
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Done cleanup3
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Kerberos 5 refuses you
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): pam_sm_authenticate: ntlm
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_acct_mgmt(): OpenDirectory - Membership cache TTL set to 1800.
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_record_check_pwpolicy(): retval: 0
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Establishing credentials
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Got user: sam
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Context initialised
        Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): pam_sm_setcred: ntlm user sam doesn't have auth authority

    All that's great and good, and I authenticate. Then I get:

        CFPreferences: user home directory for user kCFPreferencesCurrentUser at /Network/Servers/172.17.148.186/home/sam is unavailable. User domains will be volatile.
        Failed looking up user domain root; url='file://localhost/Network/Servers/172.17.148.186/home/sam/' path=/Network/Servers/172.17.148.186/home/sam/ err=-43 uid=9000 euid=9000

    If you're wondering where /Network/Servers/IP/home/sam comes from, it's from a couple of blogs that said the OpenLDAP attribute apple-user-homeDirectory should have that value and the NFSHomeDirectory on the Mac should point to apple-user-homeDirectory. I also set the attribute apple-user-homeurl to

        <home_dir><url>smb://172.17.148.186/sam/</url><path></path></home_dir>

    which I found on this forum. Any help is appreciated, because I'm banging my head against the wall at this point.
    By the way, I intend to create a blog on my VPS just for this, and create an install script in Python that people can download, so no one has to go through what I've had to go through this week :) After some sleep I am going to try to log in from a Windows machine and report back here. Thanks, Sam
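    The err=-43 in that second block is a classic "file not found" from the Carbon layer, which suggests (a guess, not a diagnosis) that the user record resolves but the SMB automount under /Network/Servers/... never happens. Two hedged checks from the Mac's Terminal; the LDAP node name and share path below are assumptions based on the IPs in the logs, substitute your own:

        # Dump the user record as the Mac sees it and check the home attributes
        dscl /LDAPv3/172.17.148.186 -read /Users/sam

        # Can the share be mounted by hand with the same credentials?
        mkdir -p /tmp/samhome
        mount_smbfs //sam@172.17.148.186/sam /tmp/samhome

    If the manual mount works but login still leaves the home unavailable, the problem is likely in how the home URL attribute is mapped rather than in Samba itself.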

  • Apache2 cgi's crash on odbc db access (but run fine from shell)

    - by Martin
    Problem overview (details below): I'm having an Apache2 + Ruby integration problem when trying to connect to an ODBC data source. The main problem boils down to the fact that scripts that run fine from an interactive shell crash Ruby on the database-connect line when run as a CGI from Apache2. Ruby CGIs that don't try to access the ODBC data source work fine. And (again) Ruby scripts that connect to a database with ODBC do fine when executed from the command line (or cron). This behavior is identical when I use Perl instead of Ruby. So the issue seems to be with the environment provided for Ruby (Perl) by Apache2, but I can't figure out what is wrong or what to do about it. Does anyone have any suggestions on how to get these CGI scripts to work properly? I've tried many different things to get this to work, and I'm happy to provide more detail on any aspect if that will help.

    Details:

        Mac OS X Server 10.5.8, Xserve 2 x 2.66 Dual-Core Intel Xeon (12 GB)
        Apache 2.2.13
        ruby 1.8.6 (2008-08-11 patchlevel 287) [universal-darwin9.0]
        ruby-odbc 0.9997
        dbd-odbc (0.2.5)
        dbi (0.4.3)
        mod_ruby 1.3.0
        Perl 5.8.8
        DBI 1.609
        DBD::ODBC 1.23
        ODBC driver: DataDirect SequeLink v5.5 (/Library/ODBC/SequeLink.bundle/Contents/MacOS/ivslk20.dylib)
        ODBC data source: FileMaker Server 10 (v10.0.2.206)

    A minimal version of a script (anonymized) that will crash in Apache but run successfully from a shell:

        #!/usr/bin/ruby
        require 'cgi'
        require 'odbc'

        cgi = CGI.new("html3")

        aConnection = ODBC::connect('DBFile', "username", 'password')
        aQuery = aConnection.prepare("SELECT zzz_kP_ID FROM DBTable WHERE zzz_kP_ID = 81044")
        aQuery.execute
        aRecord = aQuery.fetch_hash.inspect
        aQuery.drop
        aConnection.disconnect
        # aRecord = '{"zzz_kP_ID"=>81044.0}'

        cgi.out {
          cgi.html {
            cgi.body {
              "<pre>Primary Key: #{aRecord}</pre>"
            }
          }
        }

    Example of running this from a shell:

        gamma% ./minimal.rb
        (offline mode: enter name=value pairs on standard input)
        Content-Type: text/html
        Content-Length: 134

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><HTML><BODY><pre>Primary Key: {"zzz_kP_ID"=>81044.0}</pre></font></BODY></HTML>%
        gamma%

    Typical crash log lines:

        Dec 22 14:02:38 gamma ReportCrash[79237]: Formulating crash report for process perl[79236]
        Dec 22 14:02:38 gamma ReportCrash[79237]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140237_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
        Dec 22 14:03:13 gamma ReportCrash[79256]: Formulating crash report for process perl[79253]
        Dec 22 14:03:13 gamma ReportCrash[79256]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140311_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
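    Since the evidence above points at the environment Apache hands to the interpreter (ODBC stacks typically locate their driver manager and data sources through environment variables such as ODBCINI or library search paths; those names are offered as likely suspects, not confirmed causes), a quick hedged check is to diff the CGI environment against the shell's:

        #!/bin/sh
        # env.cgi - drop this next to the failing CGI (hypothetical name/path)
        # and compare its output with `env | sort` from your interactive shell
        echo "Content-Type: text/plain"
        echo
        env | sort

    Any ODBC-related variable present in the shell but missing from the CGI output can then be supplied to Apache, e.g. with a SetEnv directive in the relevant virtual host.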

  • Secure ldap problem

    - by neverland
    I have tried to configure my OpenLDAP to accept secure connections using OpenSSL on Debian 5. However, I ran into trouble with the command below:

        ldap:/etc/ldap# slapd -h 'ldap:// ldaps://' -d1
        >>> slap_listener(ldaps://)
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        connection_read(15): unable to get TLS client DN, error=49 id=7
        connection_get(15): got connid=7
        connection_read(15): checking for input on id=7
        ber_get_next
        ber_get_next on fd 15 failed errno=0 (Success)
        connection_closing: readying conn=7 sd=15 for close
        connection_close: conn=7 sd=15

    I have searched for "unable to get TLS client DN, error=49 id=7", but it seems nowhere has a good solution to this yet. Please help. Thanks.

    Well, I tried to fix a few things to get it to work, but now I get this:

        ldap:~# slapd -d 256 -f /etc/openldap/slapd.conf
        @(#) $OpenLDAP: slapd 2.4.11 (Nov 26 2009 09:17:06) $
                root@SD6-Casa:/tmp/buildd/openldap-2.4.11/debian/build/servers/slapd
        could not stat config file "/etc/openldap/slapd.conf": No such file or directory (2)
        slapd stopped.
        connections_destroy: nothing to destroy.

    What should I do now? Log:

        ldap:~# /etc/init.d/slapd start
        Starting OpenLDAP: slapd - failed.
        The operation failed but no output was produced. For hints on what
        went wrong please refer to the system's logfiles (e.g. /var/log/syslog)
        or try running the daemon in Debug mode like via "slapd -d 16383" (warning:
        this will create copious output).

        Below, you can find the command line options used by this script to
        run slapd. Do not forget to specify those options if you
        want to look to debugging output:
          slapd -h 'ldaps:///' -g openldap -u openldap -f /etc/ldap/slapd.conf

        ldap:~# tail /var/log/messages
        Feb 8 16:53:27 ldap kernel: [  123.582757] intel8x0_measure_ac97_clock: measured 57614 usecs
        Feb 8 16:53:27 ldap kernel: [  123.582801] intel8x0: measured clock 172041 rejected
        Feb 8 16:53:27 ldap kernel: [  123.582825] intel8x0: clocking to 48000
        Feb 8 16:53:27 ldap kernel: [  131.469687] Adding 240932k swap on /dev/hda5.  Priority:-1 extents:1 across:240932k
        Feb 8 16:53:27 ldap kernel: [  133.432131] EXT3 FS on hda1, internal journal
        Feb 8 16:53:27 ldap kernel: [  135.478218] loop: module loaded
        Feb 8 16:53:27 ldap kernel: [  141.348104] eth0: link up, 100Mbps, full-duplex
        Feb 8 16:53:27 ldap rsyslogd: [origin software="rsyslogd" swVersion="3.18.6" x-pid="1705" x-info="http://www.rsyslog.com"] restart
        Feb 8 16:53:34 ldap kernel: [  159.217171] NET: Registered protocol family 10
        Feb 8 16:53:34 ldap kernel: [  159.220083] lo: Disabled Privacy Extensions
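    Two hedged observations rather than a diagnosis: "unable to get TLS client DN, error=49" during the handshake often just means the client presented no certificate, which is harmless unless client certificates are required; and the second transcript fails because that slapd was pointed at /etc/openldap/slapd.conf, while Debian's init script (as its own output shows) uses /etc/ldap/slapd.conf. To test the TLS side from outside slapd's debug output:

        # Inspect the handshake and the certificate the server presents
        openssl s_client -connect localhost:636 -showcerts

        # Then try a real search over ldaps (substitute your base DN)
        ldapsearch -x -H ldaps://localhost -b "dc=example,dc=com" -d 1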

  • Error Installing ruby with RVM Single User mode on Arch Linux

    - by ChrisBurnor
    I've just installed RVM on Arch Linux x64 in single-user mode via the recommended install script:

        curl -L https://get.rvm.io | bash -s stable

    I've also installed all the requirements listed in rvm requirements. However, I'm having trouble actually installing any version of Ruby, and I get the following error:

        arch:~ % rvm install 1.9.3
        No binary rubies available for: ///ruby-1.9.3-p194.
        Continuing with compilation. Please read 'rvm mount' to get more information on binary rubies.
        Fetching yaml-0.1.4.tar.gz to /home/christopher/.rvm/archives
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100  460k  100  460k    0     0   702k      0 --:--:-- --:--:-- --:--:--  767k
        Extracting yaml-0.1.4.tar.gz to /home/christopher/.rvm/src
        Prepare yaml in /home/christopher/.rvm/src/yaml-0.1.4.
        Configuring yaml in /home/christopher/.rvm/src/yaml-0.1.4.
        Error running ' ./configure --prefix=/home/christopher/.rvm/usr ', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/yaml/configure.log
        Compiling yaml in /home/christopher/.rvm/src/yaml-0.1.4.
        Error running 'make', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/yaml/make.log
        Please note that it's required to reinstall all rubies:
            rvm reinstall all --force
        Installing Ruby from source to: /home/christopher/.rvm/rubies/ruby-1.9.3-p194, this may take a while depending on your cpu(s)...
        ruby-1.9.3-p194 - #downloading ruby-1.9.3-p194, this may take a while depending on your connection...
        ruby-1.9.3-p194 - #extracting ruby-1.9.3-p194 to /home/christopher/.rvm/src/ruby-1.9.3-p194
        ruby-1.9.3-p194 - #extracted to /home/christopher/.rvm/src/ruby-1.9.3-p194
        Skipping configure step, 'configure' does not exist, did autoreconf not run successfully?
        ruby-1.9.3-p194 - #compiling
        Error running 'make', please read /home/christopher/.rvm/log/ruby-1.9.3-p194/make.log
        There has been an error while running make. Halting the installation.

    The log files are as follows:

        arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/yaml/configure.log
        __rvm_log_command:32: permission denied:

        arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/yaml/make.log
        make: *** No targets specified and no makefile found.  Stop.

        arch:~ % cat ~/.rvm/log/ruby-1.9.3-p194/make.log
        make: *** No targets specified and no makefile found.  Stop.
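    A hedged reading of those logs: the yaml configure script fails with "permission denied" before it ever runs, which on Arch frequently means the filesystem holding ~/.rvm/src (or /tmp) is mounted noexec, or that the autotools needed to regenerate a missing configure script are absent; the later Ruby "make" failures are just fallout. Some checks under those assumptions:

        # Is the relevant mount flagged noexec?
        mount | grep noexec

        # Make sure the build toolchain is present (Arch package names)
        sudo pacman -S --needed gcc make autoconf automake libtool bison

        # Let rvm rebuild its own libyaml, then retry
        rvm pkg install libyaml
        rvm reinstall 1.9.3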

  • backup and restoration of a freeipa infrastructure

    - by Sirex
    I'm finding the documentation on IPA server backup and restoration sadly lacking, and it being so centrally critical, it's not something I'm happy about shooting in the dark with. Could some kind soul more knowledgeable in the matter please attempt to provide an idiot-proof guide to backing up and restoring IPA server(s)? Particularly the main server (the cert-signing one).

    We're looking towards rolling out IPA in a two-server setup (1 master, 1 replica). I'm using DNS SRV records to handle failover, so losing the replica isn't a big deal: I could make a new one and force a resync. It's losing the master that bothers me. The thing I'm really struggling with is locating a step-by-step procedure for backing up and restoring the master server. I'm aware that a whole-VM snapshot is the recommended way of doing IPA server backup, but that isn't an option for us at this time. I'm also aware that FreeIPA 3.2.0 includes some sort of built-in backup command, but that isn't in the IPA version in CentOS, and I don't expect it will be for some time yet.

    I've been trying many different methods, but none of them seem to restore cleanly. Amongst others, I've tried:

    • a command similar to: db2ldif.pl -D "cn=directory manager" -w - -n userroot -a /root/userroot.ldif
    • the script from here to produce three ldif files: one for the domain ({domain}-userroot), and two for the IPA server (ipa-ipaca and ipa-userroot)

    Most of the restores I've tried have been of the form:

        ldif2db.pl -D "cn=directory manager" -w - -n userroot -i userroot.ldif

    which seems to work and reports no errors, but totally borks the IPA install on the machine: I can no longer log in with either the admin password from the backed-up server or the one I set at installation before attempting the ldif2db command (I'm installing ipa-server and running ipa-server-install, then attempting the restore). I'm not overly bothered about losing the CA, having to rejoin the domain, losing replication, etc. (although it'd be awesome if that could be avoided), but if the main server drops, I'd really like to avoid having to re-enter all the user/group information. I guess if I lost the main server I could promote the other one and replicate in the other direction, but I've not tried that, either. Has anyone done that?

    tl;dr: Can someone provide an idiot's guide to backing up and restoring an IPA server (preferably on CentOS 6) in a clear enough way that'd make me feel confident it'll actually work when the dreaded time comes? Crayons are optional, but appreciated ;-) I can't be the only person struggling with this, seeing how widely used IPA is, surely?
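    For what it's worth, a hedged sketch of the export side only, which mirrors the commands already tried above. The 389-ds instance name (slapd-EXAMPLE-COM) and the backend names are assumptions to substitute from your own /etc/dirsrv; the offline db2ldif (as opposed to db2ldif.pl) needs dirsrv stopped but produces a consistent ldif:

        # Export each backend while the directory server is stopped
        service dirsrv stop
        for be in userRoot ipaca; do
            /usr/lib64/dirsrv/slapd-EXAMPLE-COM/db2ldif -n "$be" -a "/root/backup-$be.ldif"
        done
        service dirsrv start

    This only addresses the half that already seems to work; the clean restore into a freshly installed server is exactly the part the question is asking about, and this sketch makes no claim there.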

  • How to use bzdiff to find difference between 2 bzipped files with diff -I option?

    - by englebip
    I'm trying to do a diff on MySQL dumps (created with mysqldump and piped to bzip2), to see if there are changes between consecutive dumps. The following are the tails of two dumps:

    tmp1:

        /*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
        /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
        /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
        /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
        /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
        /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
        /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;

        -- Dump completed on 2011-03-11  1:06:50

    tmp2:

        /*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
        /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
        /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
        /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
        /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
        /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
        /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;

        -- Dump completed on 2011-03-11  0:40:11

    When I bzdiff their bzipped versions:

        $ bzdiff tmp?.bz2
        10c10
        < -- Dump completed on 2011-03-11  1:06:50
        ---
        > -- Dump completed on 2011-03-11  0:40:11

    According to the manual of bzdiff, any option passed to bzdiff is passed on to diff. I therefore looked at the -I option, which lets you define a regexp; lines matching it are ignored in the diff. When I then try:

        $ bzdiff -I'Dump' tmp1.bz2 tmp2.bz2

    I get an empty diff. I would like to match as much as possible of the "Dump completed" line, though, but when I try:

        $ bzdiff -I'Dump completed' tmp1.bz2 tmp2.bz2
        diff: extra operand `/tmp/bzdiff.miCJEvX9E8'
        diff: Try `diff --help' for more information.

    The same thing happens for some variations:

        $ bzdiff '-IDump completed' tmp1.bz2 tmp2.bz2
        $ bzdiff '-I"Dump completed"' tmp1.bz2 tmp2.bz2
        $ bzdiff -'"IDump completed"' tmp1.bz2 tmp2.bz2

    If I diff the un-bzipped files there is no problem:

        $ diff -I'^[-][-] Dump completed on' tmp1 tmp2

    also gives an empty diff. bzdiff is a shell script usually placed in /bin/bzdiff. Essentially, it parses the command options and passes them on to diff as follows:

        OPTIONS=
        FILES=
        for ARG
        do
            case "$ARG" in
            -*) OPTIONS="$OPTIONS $ARG";;
             *) if test -f "$ARG"; then
                    FILES="$FILES $ARG"
                else
                    echo "${prog}: $ARG not found or not a regular file"
                    exit 1
                fi ;;
            esac
        done
        [...]
        bzip2 -cdfq "$1" | $comp $OPTIONS - "$tmp"

    I think the problem stems from the unescaped spaces when $OPTIONS is passed to diff, but I couldn't figure out how to get it interpreted correctly. Any ideas?

    EDIT @DerfK: Good point with the ., I had forgotten about them... I tried the suggestion with the multiple levels of quotes, but that is still not recognized:

        $ bzdiff "-I'\"Dump.completed.on\"'" tmp1.bz2 tmp2.bz2
        diff: extra operand `/tmp/bzdiff.Di7RtihGGL'
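    Given that unquoted $OPTIONS expansion, any -I pattern containing whitespace gets word-split into an extra argument, which is exactly what the "extra operand" error suggests. Two hedged workarounds that sidestep the wrapper's parsing:

        # 1. Keep the pattern whitespace-free; '.' matches the literal space
        bzdiff -I'Dump.completed.on' tmp1.bz2 tmp2.bz2

        # 2. Skip bzdiff entirely and hand diff two decompressed streams
        #    (process substitution; bash/zsh only)
        diff -I'^-- Dump completed on' <(bzcat tmp1.bz2) <(bzcat tmp2.bz2)

    The multi-level quoting in the EDIT fails for a different reason: the inner quote characters survive as literal parts of the pattern, so nothing matches.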

  • How to properly add .NET assemblies to Powershell session?

    - by amandion
    I have a .NET assembly (a DLL) which is an API to backup software we use here. It contains some properties and methods I would like to take advantage of in my PowerShell script(s). However, I am running into a lot of issues, first with loading the assembly, then with using any of the types once the assembly is loaded. The complete file path is C:\rnd\CloudBerry.Backup.API.dll. In PowerShell I use:

        $dllpath = "C:\rnd\CloudBerry.Backup.API.dll"
        Add-Type -Path $dllpath

    I get the error below:

        Add-Type : Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
        At line:1 char:9
        + Add-Type <<<<  -Path $dllpath
            + CategoryInfo          : NotSpecified: (:) [Add-Type], ReflectionTypeLoadException
            + FullyQualifiedErrorId : System.Reflection.ReflectionTypeLoadException,Microsoft.PowerShell.Commands.AddTypeCommand

    Using the same cmdlet on another .NET assembly, DotNetZip, whose site has examples of using the same functionality, also does not work for me. I eventually found that I am seemingly able to load the assembly using reflection:

        [System.Reflection.Assembly]::LoadFrom($dllpath)

    Although I don't understand the difference between the methods Load, LoadFrom, and LoadFile, that last method seems to work. However, I still seem to be unable to create instances or use objects. Each time I try, I get errors saying PowerShell is unable to find any of the public types. I know the classes are there:

        $asm = [System.Reflection.Assembly]::LoadFrom($dllpath)
        $cbbtypes = $asm.GetExportedTypes()
        $cbbtypes | Get-Member -Static

        ---- start of excerpt ----
           TypeName: CloudBerryLab.Backup.API.BackupProvider

        Name                MemberType Definition
        ----                ---------- ----------
        PlanChanged         Event      System.EventHandler`1[CloudBerryLab.Backup.API.Utils.ChangedEventArgs] PlanChanged(Sy...
        PlanRemoved         Event      System.EventHandler`1[CloudBerryLab.Backup.API.Utils.PlanRemoveEventArgs] PlanRemoved...
        CalculateFolderSize Method     static long CalculateFolderSize()
        Equals              Method     static bool Equals(System.Object objA, System.Object objB)
        GetAccounts         Method     static CloudBerryLab.Backup.API.Account[], CloudBerry.Backup.API, Version=1.0.0.1, Cu...
        GetBackupPlans      Method     static CloudBerryLab.Backup.API.BackupPlan[], CloudBerry.Backup.API, Version=1.0.0.1,...
        ReferenceEquals     Method     static bool ReferenceEquals(System.Object objA, System.Object objB)
        SetProfilePath      Method     static System.Void SetProfilePath(string profilePath)
        ---- end of excerpt ----

    Trying to use static methods fails, and I don't know why!

        [CloudBerryLab.Backup.API.BackupProvider]::GetAccounts()
        Unable to find type [CloudBerryLab.Backup.API.BackupProvider]: make sure that the assembly containing this type is loaded.
        At line:1 char:42
        + [CloudBerryLab.Backup.API.BackupProvider] <<<< ::GetAccounts()
            + CategoryInfo          : InvalidOperation: (CloudBerryLab.Backup.API.BackupProvider:String) [], RuntimeException
            + FullyQualifiedErrorId : TypeNotFound

    Any guidance appreciated!

    Read the article

  • Windows Question: RunOnce/Second Boot Issues

    - by Greg
    I am attempting to create a Windows XP SP3 image that will run my application on second boot. Here is the intended workflow:

    1. Run an image prep utility (which I wrote) on Windows to add my RunOnce entries and clean a few things up.
    2. Reboot to Ghost and make the image file.
    3. Package the image into my ISO and distribute it.
    4. The system is imaged by the user.
    5. On first boot, about five things run, one of which is a driver updater (which I wrote) for my own specific devices.
    6. One of the entries inside HKCU\...\RunOnce is a .reg file, which adds another key to HKLM\...\RunOnce. This is how second boot is set up.
    7. As a result of the driver updater, the user is prompted to reboot.
    8. My application is then launched from HKLM\...\RunOnce on second boot.

    This workflow works perfectly, except on a select few legacy systems that contain devices that cause the Add Hardware wizard to pop up, and that is when I begin to see problems. It's important to note that if I manually inspect the registry after the Add Hardware wizard pops up, it appears as I would expect: all the first-boot scripts have run, and it is sitting in exactly the state I would expect for a second-boot scenario.

    The problem comes when I click Next on the Add Hardware wizard: it seems to re-run the single entry I've added, re-executing the RunOnce scripts (only one script now, since the initial entries have already executed and been cleared out). This causes my application to open as if it were a second boot, but only when Next is clicked on the Add Hardware wizard. If I click Cancel and reboot instead, everything works as expected.

    I don't care as much about other solutions, because I could design a system that doesn't fully rely on Microsoft's registry. I simply can't find any information as to WHY this is happening. I believe this is some type of Microsoft issue presenting itself as a result of an overstretched image that is expected to support too many legacy platforms, but any help that can be provided would be appreciated. Thanks,
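
    For reference, the chaining step in (6) boils down to a .reg file along these lines; the key path is the standard RunOnce location, while the value name and application path here are hypothetical placeholders:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce]
        "LaunchMyApp"="C:\\MyApp\\MyApp.exe"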

    Read the article

  • Automatically Applying Security Updates for AWS Elastic Beanstalk

    - by Eric Anderson
    I've been a fan of Heroku since its earliest days, but I like the fact that AWS Elastic Beanstalk gives you more control over the characteristics of the instances. One thing I love about Heroku is that I can deploy an app and not worry about managing it: I am assuming Heroku ensures all OS security updates are applied in a timely manner, so I just need to make sure my app is secure. My initial research on Beanstalk shows that although it builds and configures the instances for you, after that it moves to a more manual management process, and security updates won't automatically be applied to the instances. There seem to be two areas of concern:

    1. New AMI releases. As new AMI releases come out, it seems we would want to run the latest (presumably most secure) one. But my research seems to indicate you need to manually launch a new setup to see the latest AMI version, and then create a new environment to use it. Is there a better, automated way of rotating your instances onto new AMI releases?

    2. Package updates between releases. In between AMI releases there will be security updates for individual packages, and it seems we want to apply those as well. My research seems to indicate people install commands that occasionally run a yum update. But since new instances are created and destroyed based on usage, a new instance would not always have the updates (i.e., in the time between the instance's creation and its first yum update), so occasionally you will have instances that aren't patched. You will also have instances constantly patching themselves until the new AMI release is applied. My other concern is that these security updates haven't gone through Amazon's own review (as the AMI releases do), and automatically applying them might break my app. I know Dreamhost once had a 12-hour outage because they were applying Debian updates completely automatically, without any review; I want to make sure the same thing doesn't happen to me.

    So my question is: does Amazon provide a way to offer a fully managed PaaS like Heroku? Or is AWS Elastic Beanstalk really just an install script, after which you are on your own (apart from the monitoring and deployment tools they provide)?
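
    On the second concern: if automatic patching is acceptable despite the review caveat, one way to bake it into every instance Beanstalk launches is an .ebextensions config shipped in the application bundle. A sketch, assuming an Amazon Linux AMI where the yum-cron package is available; the file name is arbitrary:

        # .ebextensions/yum-cron.config
        packages:
          yum:
            yum-cron: []                       # nightly yum update via cron
        commands:
          01_enable_yum_cron:
            command: /sbin/chkconfig yum-cron on
          02_start_yum_cron:
            command: /sbin/service yum-cron start
            ignoreErrors: true                 # service may already be running on redeploys

    Because the config travels with the app bundle, every instance the autoscaler brings up runs it during provisioning, which closes the window between instance creation and the first update; it does not, however, address AMI rotation or limit updates to Amazon-reviewed ones.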

    Read the article

  • UNIX - mount: only root can do that

    - by Travesty3
    I need to allow a non-root user to mount/unmount a device. I am a total noob when it comes to UNIX, so please dumb it down for me. I've been looking all over teh interwebz to find an answer, and everyone seems to give the same one: modify /etc/fstab to include that device with the 'user' option (or 'users'; I tried both). Cool, well I did that and it still says "mount: only root can do that". Here are the contents of my fstab:

        # /etc/fstab: static file system information.
        #
        # Use 'vol_id --uuid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        proc /proc proc defaults 0 0
        # / was on /dev/mapper/minicc-root during installation
        UUID=1a69f02a-a049-4411-8c57-ff4ebd8bb933 / ext3 relatime,errors=remount-ro 0 1
        # /boot was on /dev/sda5 during installation
        UUID=038498fe-1267-44c4-8788-e1354d71faf5 /boot ext2 relatime 0 2
        # swap was on /dev/mapper/minicc-swap_1 during installation
        UUID=0bb583aa-84a8-43ef-98c4-c6cb25d20715 none swap sw 0 0
        /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0
        /dev/scd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
        /dev/sdb1 /mnt/sdcard auto auto,user,rw,exec 0 0

    My thumb drive's partition shows up as /dev/sdb1. I'm pretty sure my fstab is set up OK, but everyone in the other posts seems to fail to mention how to actually call the 'mount' command once this entry is in the fstab file. I think this is where my problem may be. The command I use to mount the drive is:

        $ mount /dev/sdb1 /mnt/sdcard

    /bin/mount is owned by root, is in the root group, and has 4755 permissions. /bin/umount is owned by root, is in the root group, and has 4755 permissions. /mnt/sdcard is owned by me, is in one of my groups, and has 0755 permissions. My mount command works fine if I use sudo, but I need to be able to do this without sudo (I need to do it from a PHP script using shell_exec). Any suggestions? Sorry for making you read so much... just trying to get as much info in the initial post as possible to preemptively answer questions about configuration stuff. If I missed anything tho, ask away. Thanks! -Travis
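
    One detail worth checking first: with the user option in fstab, mount(8) expects a non-root user to name either the device or the mount point, but not both; passing both arguments is treated as a full mount request, which only root may perform, and that matches the error above. Assuming the fstab entry shown, either single-argument form should work:

        $ mount /mnt/sdcard
        $ mount /dev/sdb1

    Also note that when PHP runs under a web server, shell_exec executes as the web server's user (often apache or www-data), so that is the account that needs the mount permission; a narrowly scoped sudoers rule added via visudo (allowing only /bin/mount /mnt/sdcard and /bin/umount /mnt/sdcard with NOPASSWD) is a common alternative.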

    Read the article

  • sftp and public keys

    - by Lizard
    I am trying to sftp into a server hosted by someone else. To make sure this works, I did the standard

        sftp [email protected]

    I was prompted for the password, and that worked fine. I am setting up a cron script to send a file once a week, so I have given them our public key, which they claim to have added to their authorized_keys file. I now try sftp [email protected] again, and I am still prompted for a password, but now the password doesn't work...

        Connecting to [email protected]...
        [email protected]'s password:
        Permission denied, please try again.
        [email protected]'s password:
        Permission denied, please try again.
        [email protected]'s password:
        Permission denied (publickey,password).
        Couldn't read packet: Connection reset by peer

    I did notice, however, that if I simply pressed Enter (no password) it logged me in fine... So here are my questions:

    1. Is there a way to check which private/public key pair my sftp connection is using?
    2. Is it possible to specify which key pair to use?
    3. If everything is set up correctly (correct key pair, added to the authorized_keys file), why am I being asked to enter a blank password?

    Thanks for your help in advance!

    UPDATE: I have just run sftp -vvv [email protected] ...

        debug1: Authentications that can continue: publickey,password
        debug3: start over, passed a different list publickey,password
        debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password
        debug3: authmethod_lookup publickey
        debug3: remaining preferred: keyboard-interactive,password
        debug3: authmethod_is_enabled publickey
        debug1: Next authentication method: publickey
        debug1: Offering public key: /root/.ssh/id_rsa
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        debug2: input_userauth_pk_ok: SHA1 fp 45:1b:e7:b6:33:41:1c:bb:0f:e3:c1:0f:1b:b0:d5:e4:28:a3:3f:0e
        debug3: sign_and_send_pubkey
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey,password
        debug1: Trying private key: /root/.ssh/id_dsa
        debug3: no such identity: /root/.ssh/id_dsa
        debug2: we did not send a packet, disable method
        debug3: authmethod_lookup password
        debug3: remaining preferred: ,password
        debug3: authmethod_is_enabled password
        debug1: Next authentication method: password

    It seems to suggest that it tries to use the public key... What am I missing?
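
    On the first two questions: the -vvv trace already shows which key is offered ("Offering public key: /root/.ssh/id_rsa"), and the key pair can be pinned explicitly. A sketch; note that the -i flag is only available in newer sftp builds, hence the -o spelling:

        # Fingerprint of the local key, to compare with what the host installed
        ssh-keygen -lf /root/.ssh/id_rsa.pub

        # Force this key, ignore all others, and show the negotiation
        sftp -oIdentityFile=/root/.ssh/id_rsa -oIdentitiesOnly=yes -v [email protected]

    The trace also shows the server finding the key ("Server accepts key") yet authentication still falling through to password after sign_and_send_pubkey, which points at the server side rather than key selection; it is worth asking the host to compare fingerprints and check their sshd auth log.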

    Read the article

  • NTPD seems to delete all network interfaces

    - by Aurelin
    We have a couple of virtual interfaces configured on eth0 on a CentOS box, and every now and then they went down, seemingly out of the blue. After going through the log files, I found out that apparently ntpd deletes all the eth0 interfaces, and that dhclient automatically brings eth0 back up. The virtual interfaces, however, stay down, which causes several of our websites to be inaccessible. Can someone explain to me why ntpd deletes interfaces? Can/should that be turned off, or can/should I configure dhclient to bring the virtual interfaces back up automatically, too?

    EDIT: The log files I should have posted:

        Nov 12 13:10:28 raptor dhclient[20048]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x6a825e97)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8 (xid=0x24554092)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPOFFER from 96.126.108.78
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x24554092)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPACK from 96.126.108.78 (xid=0x24554092)
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #31 eth0, 50.116.50.97#123, interface stats: received=3255, sent=3256, dropped=0, active_time=1559394 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #32 eth0:0, 50.116.53.56#123, interface stats: received=3, sent=0, dropped=0, active_time=1559391 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #33 eth0:1, 66.175.211.192#123, interface stats: received=2, sent=0, dropped=0, active_time=1559389 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #34 eth0:2, 50.116.53.95#123, interface stats: received=3, sent=0, dropped=0, active_time=1559387 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #35 eth0:3, 97.107.132.32#123, interface stats: received=2, sent=0, dropped=0, active_time=1559385 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #36 eth0:4, 50.116.56.201#123, interface stats: received=2, sent=0, dropped=0, active_time=1559383 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #37 eth0:5, 66.175.212.121#123, interface stats: received=2, sent=0, dropped=0, active_time=1559381 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #38 eth0:6, 66.175.215.137#123, interface stats: received=2, sent=0, dropped=0, active_time=1559379 secs
        Nov 12 13:10:44 raptor NET[1573]: /sbin/dhclient-script : updated /etc/resolv.conf
        Nov 12 13:10:44 raptor dhclient[20048]: bound to 50.116.50.97 -- renewal in 32692 seconds.
        Nov 12 13:10:45 raptor ntpd[2109]: Listening on interface #39 eth0, 50.116.50.97#123 Enabled

    The eth0 config:

        DEVICE="eth0"
        ONBOOT="yes"
        BOOTPROTO="dhcp"
        IPV6INIT="no"
        IPADDR=50.116.50.97
        NETMASK=255.255.255.0
        GATEWAY=50.116.50.1

    And the virtual interfaces (I posted the first one only; they look the same for the most part):

        # Configuration for eth0:0
        DEVICE=eth0:0
        BOOTPROTO=none
        # This line ensures that the interface will be brought up during boot.
        ONBOOT=yes
        # eth0:0
        IPADDR=50.116.53.56
        NETMASK=255.255.255.0
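
    For what it's worth, ntpd is not taking the interfaces down itself: its "Deleting interface" lines are bookkeeping for addresses that disappeared when the DHCP renewal bounced eth0. A pragmatic fix is a dhclient exit hook that re-ups the aliases after each lease event. A sketch, assuming CentOS's /sbin/dhclient-script sources /etc/dhclient-exit-hooks and that ifcfg-eth0:N files exist for each alias (as shown above):

        # /etc/dhclient-exit-hooks -- sourced by dhclient-script after each event;
        # $interface and $reason are set by dhclient-script
        if [ "$interface" = "eth0" ]; then
            case "$reason" in
                BOUND|RENEW|REBIND|REBOOT)
                    for n in 0 1 2 3 4 5 6; do
                        /sbin/ifup "eth0:$n" >/dev/null 2>&1
                    done
                    ;;
            esac
        fi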

    Read the article

< Previous Page | 851 852 853 854 855 856 857 858 859 860 861 862  | Next Page >