Search Results

Search found 40031 results on 1602 pages for 'command message'.


  • Retrieve user details from Active Directory using SID

    - by er4z0r
    How can I find a user in my AD when I have his/her SID? I don't want to rely on other attributes, since I am trying to detect changes to those. Example: I get a message about a change to a user record containing:

        Message: User Account Changed:
        Target Account Name: test12
        Target Domain: DOMAIN
        Target Account ID: %{S-1-5-21-3968247570-3627839482-368725868-1110}
        Caller User Name: Administrator
        Caller Domain: DOMAIN
        Caller Logon ID: (0x0,0x62AB1)
        Privileges: -

    I want to notify the user about the change, so I need their account information from AD. When I try to retrieve the user's data from AD via VBScript like this:

        Wscript.StdOut.WriteLine "Found an Account ID: " & objMatch.value
        Set objUser = GetObject("LDAP://GUID=1521396824757036278394823687258681110")
        Wscript.StdOut.WriteLine objUser

    I receive an error stating "The handle is invalid" (code 80070006).
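
    For reference, a minimal VBScript sketch of binding by the SID itself rather than by GUID; the SID string is the one embedded in the event text above, and the attribute names at the end are assumptions that may not be populated in every directory:

        ' ADSI accepts a SID binding string of the form LDAP://<SID=...>; the angle brackets are required
        strSid = "S-1-5-21-3968247570-3627839482-368725868-1110"
        Set objUser = GetObject("LDAP://<SID=" & strSid & ">")
        ' sAMAccountName and mail are assumed attribute names for illustration
        Wscript.StdOut.WriteLine "sAMAccountName: " & objUser.Get("sAMAccountName")
        Wscript.StdOut.WriteLine "mail: " & objUser.Get("mail")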

    Read the article

  • NSClient++: external script with optional arguments

    - by syneticon-dj
    I am trying to define an external script which would take optional arguments in NSClient++ 0.4.1 on Windows. Following the nsclient-full.ini example code I have defined

        mycheck=cmd /C echo C:\mydir\myscript.ps1 %ARGS% | powershell.exe -command -

    which simply yields the string %ARGS% passed as the only argument to myscript.ps1, no matter what I specify in my call through NRPE (using Nagios' check_nrpe, if that matters). I then tried to rewrite the definition as

        mycheck=cmd /C echo C:\mydir\myscript.ps1 $ARG1$ $ARG2$ | powershell.exe -command -

    (myscript.ps1 would take up to two arguments), which does help a bit: if two arguments are provided, I can fetch them via the args[] array. The trouble starts when the call has fewer than two arguments - in that case the literal strings $ARG1$ and $ARG2$ are passed through as arguments. Handling this case in the code of myscript.ps1 makes the whole argument-processing routine ugly at best. Is there a sane way of defining optional parameters to an external script which would not pass NSClient's variable names if no parameter has been specified?
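
    As an aside, if the unexpanded placeholders do end up having to be handled inside myscript.ps1, here is a short hedged sketch of filtering them out, assuming NSClient++ really does pass the literal $ARGn$ tokens through unchanged:

        # drop any argument that is still an unexpanded NSClient++ placeholder such as $ARG1$ or $ARG2$
        $cleanArgs = $args | Where-Object { $_ -notmatch '^\$ARG\d+\$$' }
        # $cleanArgs now holds only the arguments the caller actually supplied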

    Read the article

  • Exchange 2010 setup /prepareAD fails to run

    - by MadBoy
    I've tried installing Exchange 2010 on Windows Server 2008 R2 (the only domain controller, an all-in-one system). I ran setup.exe /prepareAD and setup /prepareSchema and they worked fine the first time. The Hub Transport installation then failed, apparently (at least from what I've read) related to IPv6 being disabled (some say disabling it helped them, some say enabling it did). I had disabled IPv6 the proper way, via the registry entry, but setup still errored out. So I managed to uninstall everything (renamed some old registry entries of the failed Hub Transport role) and tried to reinstall Exchange after rebooting the server. Unfortunately, running setup /prepareAD now gives an error:

        D:\>setup /PrepareAd
        Welcome to Microsoft Exchange Server 2010 Unattended Setup
        By continuing the installation process, you agree to the license terms of Microsoft Exchange Server 2010. If you don't accept these license terms, please cancel the installation. To review these license terms, please go to http://go.microsoft.com/fwlink/?LinkId=150127&clcid=0x409/
        Press any key to cancel setup................ No key presses were detected. Setup will continue.
        Preparing Exchange Setup
        Copying Setup Files ......................... COMPLETED
        No server roles will be installed
        Performing Microsoft Exchange Server Prerequisite Check
        Organization Checks ......................... COMPLETED
        Setup is going to prepare the organization for Exchange 2010 by using 'Setup /PrepareAD'. No Exchange 2007 server roles have been detected in this topology. After this operation, you will not be able to install any Exchange 2007 server roles.
        Configuring Microsoft Exchange Server
        Organization Preparation ......................... FAILED
        The following error was generated when "$error.Clear(); buildToBuildUpgrade -ExsetDataAtom -AtomName OrgLevelCt -DomainController $RoleDomainController" was run: "An error occurred with error code '2147504140' and message 'The data type can't be converted to or from a native Active Directory data type.'.".
        The Exchange Server setup operation did not complete. Visit http://support.microsoft.com and enter the Error ID to find more information.
        Exchange Server setup encountered an error.

    Unfortunately, if I rerun the setup it complains that setup /prepareAD needs to be run first. Basically all that works now is setup /PrepareSchema; setup /PrepareDomain complains that prepareAD wasn't done. For completeness, I'm also attaching the error I had before I uninstalled everything and tried again:

        Hub Transport Role Failed
        Error: The following error was generated when "$error.Clear(); install-ExsetdataAtom -AtomName SharedMachineSettings -DomainController $RoleDomainController" was run: "An error occurred with error code '2147950640' and message 'There is no such object on the server.'.".
        An error occurred with error code '2147950640' and message 'There is no such object on the server.'.

    Read the article

  • Renci.SSHNet and HP ILO 4

    - by Andrew J. Brehm
    I am using Renci.SSHNet to connect to HP iLO processors. Generally this works fine: I can connect, run several commands and disconnect. However, I noticed that a few new servers that use iLO 4 simply don't react to anything but the first command sent. When I log in using PuTTY everything works fine, but over an SSH connection with Renci only the first command sent is recognised, whereas the second and further commands cause no reaction whatsoever from the iLO processor, not even an error message. Any ideas why that might be?

    Read the article

  • Encounter error "Linux ip -6 addr add failed" while setting up OpenVPN client

    - by Mickel
    I am trying to set up my router to use OpenVPN and have gotten quite far (I think), but something seems to be missing and I am not sure what. Here is my configuration for the client:

        client
        dev tun
        proto udp
        remote ovpn.azirevpn.net 1194
        remote-random
        resolv-retry infinite
        auth-user-pass /tmp/password.txt
        nobind
        persist-key
        persist-tun
        ca /tmp/AzireVPN.ca.crt
        remote-cert-tls server
        reneg-sec 0
        verb 3

    OpenVPN client log:

        Nov 8 15:45:13 rc_service: httpd 15776:notify_rc start_vpnclient1
        Nov 8 15:45:14 openvpn[27196]: OpenVPN 2.3.2 arm-unknown-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [MH] [IPv6] built on Nov 1 2013
        Nov 8 15:45:14 openvpn[27196]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Nov 8 15:45:14 openvpn[27196]: Socket Buffers: R=[116736->131072] S=[116736->131072]
        Nov 8 15:45:14 openvpn[27202]: UDPv4 link local: [undef]
        Nov 8 15:45:14 openvpn[27202]: UDPv4 link remote: [AF_INET]178.132.75.14:1194
        Nov 8 15:45:14 openvpn[27202]: TLS: Initial packet from [AF_INET]178.132.75.14:1194, sid=44d80db5 8b36adf9
        Nov 8 15:45:14 openvpn[27202]: WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
        Nov 8 15:45:14 openvpn[27202]: VERIFY OK: depth=1, C=RU, ST=Moscow, L=Moscow, O=Azire Networks, OU=VPN, CN=Azire Networks, name=Azire Networks, [email protected]
        Nov 8 15:45:14 openvpn[27202]: Validating certificate key usage
        Nov 8 15:45:14 openvpn[27202]: ++ Certificate has key usage 00a0, expects 00a0
        Nov 8 15:45:14 openvpn[27202]: VERIFY KU OK
        Nov 8 15:45:14 openvpn[27202]: Validating certificate extended key usage
        Nov 8 15:45:14 openvpn[27202]: ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
        Nov 8 15:45:14 openvpn[27202]: VERIFY EKU OK
        Nov 8 15:45:14 openvpn[27202]: VERIFY OK: depth=0, C=RU, ST=Moscow, L=Moscow, O=AzireVPN, OU=VPN, CN=ovpn, name=ovpn, [email protected]
        Nov 8 15:45:15 openvpn[27202]: Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Nov 8 15:45:15 openvpn[27202]: Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Nov 8 15:45:15 openvpn[27202]: Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Nov 8 15:45:15 openvpn[27202]: Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Nov 8 15:45:15 openvpn[27202]: Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 2048 bit RSA
        Nov 8 15:45:15 openvpn[27202]: [ovpn] Peer Connection Initiated with [AF_INET]178.132.75.14:1194
        Nov 8 15:45:17 openvpn[27202]: SENT CONTROL [ovpn]: 'PUSH_REQUEST' (status=1)
        Nov 8 15:45:17 openvpn[27202]: PUSH: Received control message: 'PUSH_REPLY,ifconfig-ipv6 2a03:8600:1001:4010::101f/64 2a03:8600:1001:4010::1,route-ipv6 2000::/3 2A03:8600:1001:4010::1,redirect-gateway def1 bypass-dhcp,dhcp-option DNS 194.1.247.30,tun-ipv6,route-gateway 178.132.77.1,topology subnet,ping 3,ping-restart 15,ifconfig 178.132.77.33 255.255.255.192'
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: timers and/or timeouts modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: --ifconfig/up options modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: route options modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: route-related options modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
        Nov 8 15:45:17 openvpn[27202]: TUN/TAP device tun0 opened
        Nov 8 15:45:17 openvpn[27202]: TUN/TAP TX queue length set to 100
        Nov 8 15:45:17 openvpn[27202]: do_ifconfig, tt->ipv6=1, tt->did_ifconfig_ipv6_setup=1
        Nov 8 15:45:17 openvpn[27202]: /usr/sbin/ip link set dev tun0 up mtu 1500
        Nov 8 15:45:18 openvpn[27202]: /usr/sbin/ip addr add dev tun0 178.132.77.33/26 broadcast 178.132.77.63
        Nov 8 15:45:18 openvpn[27202]: /usr/sbin/ip -6 addr add 2a03:8600:1001:4010::101f/64 dev tun0
        Nov 8 15:45:18 openvpn[27202]: Linux ip -6 addr add failed: external program exited with error status: 254
        Nov 8 15:45:18 openvpn[27202]: Exiting due to fatal error

    Any ideas are most welcome!

    Read the article

  • Session state has been disabled for ASP.NET.The Report Viewer control requires that session state be

    - by Patrick Olurotimi Ige
    While I was trying to set up some reports on an SPF 2010 site, after adding a SQL Reporting web part I got the error: "Session state has been disabled for ASP.NET. The Report Viewer control requires that session state be enabled in local mode." I never came across that error before when using the RS web parts in SharePoint 2007 :) But I think it is related to ASP.NET session state :( Anyway, to fix it you have to make sure the SharePoint Server ASP.NET Session State Service is enabled. Unfortunately, in SharePoint Foundation 2010 and SharePoint 2010 you have to do it using PowerShell, by running Enable-SPSessionStateService -DefaultProvision:

        PS C:\> Get-PSSnapin                                       # shows you the PS snap-ins
        PS C:\> Add-PSSnapin "Microsoft.SharePoint.PowerShell"     # adds the SharePoint PS snap-in
        PS C:\> Get-Command -Noun SP*                              # shows all SP cmdlets
        PS C:\> Add-SPShellAdmin                                   # if you get an error regarding access to the farm, use this command to add an account able to run the shell admin cmdlets
        PS C:\> Get-Help Enable-SPSessionStateService              # shows help for Enable-SPSessionStateService
        PS C:\> Enable-SPSessionStateService -DefaultProvision     # enables/activates the SPSessionStateService

    More options are available when you run Get-Help Enable-SPSessionStateService. After running the cmdlet you should see the SharePoint Server ASP.NET Session State Service started under Service Applications in the Central Administration site, and of course be able to add your RS web parts. Enjoy!

    Read the article

  • Server 2012 AD-DS Setup Fails (Microsoft.Directory.Services.Deployment.DeepTasks.DeepTasks not found)

    - by Daniel Steiner
    Good morning everyone, I am currently trying to promote my 2012 server to a domain controller, but at the first step of the setup I get the following error message (translated from the German original):

        [Deployment configuration] Error while determining whether the target server is already a domain controller: The type [Microsoft.Directory.Services.Deployment.DeepTasks.DeepTasks] was not found: Make sure that the assembly that contains this type is loaded.

    Because of this I can neither configure AD DS nor uninstall it via Server Manager. Any help with fixing this problem would be greatly appreciated.

    Read the article

  • Macports: port install jpeg fails

    - by Philipp Keller
    History:

        installed MacPorts on Leopard
        upgraded to Snow Leopard
        uninstall all ports
        reinstalled XCode
        sudo port uninstall jpeg
        sudo port install jpeg

        DEBUG: Found port in file:///opt/local/var/macports/sources/rsync.macports.org/release/ports/graphics/jpeg
        DEBUG: Changing to port directory: /opt/local/var/macports/sources/rsync.macports.org/release/ports/graphics/jpeg
        DEBUG: OS Platform: darwin
        DEBUG: OS Version: 10.3.0
        DEBUG: Mac OS X Version: 10.6
        DEBUG: System Arch: i386
        DEBUG: setting option os.universal_supported to yes
        DEBUG: org.macports.load registered provides 'load', a pre-existing procedure. Target override will not be provided
        DEBUG: org.macports.unload registered provides 'unload', a pre-existing procedure. Target override will not be provided
        DEBUG: org.macports.distfiles registered provides 'distfiles', a pre-existing procedure. Target override will not be provided
        DEBUG: adding the default universal variant
        DEBUG: Reading variant descriptions from /opt/local/var/macports/sources/rsync.macports.org/release/ports/_resources/port1.0/variant_descriptions.conf
        DEBUG: Requested variant darwin is not provided by port jpeg.
        DEBUG: Requested variant i386 is not provided by port jpeg.
        DEBUG: Requested variant macosx is not provided by port jpeg.
        --- Computing dependencies for jpeg
        DEBUG: Executing org.macports.main (jpeg)
        DEBUG: Skipping completed org.macports.fetch (jpeg)
        DEBUG: Skipping completed org.macports.checksum (jpeg)
        DEBUG: Skipping completed org.macports.extract (jpeg)
        DEBUG: Skipping completed org.macports.patch (jpeg)
        --- Configuring jpeg
        DEBUG: Using compiler 'Mac OS X gcc 4.2'
        DEBUG: Executing org.macports.configure (jpeg)
        DEBUG: Environment: CFLAGS='-O2 -arch x86_64' CXXFLAGS='-O2 -arch x86_64' MACOSX_DEPLOYMENT_TARGET='10.6' CXX='/usr/bin/g++-4.2' F90FLAGS='-O2 -m64' LDFLAGS='-arch x86_64' OBJC='/usr/bin/gcc-4.2' FCFLAGS='-O2 -m64' INSTALL='/usr/bin/install -c' OBJCFLAGS='-O2 -arch x86_64' FFLAGS='-O2 -m64' CC='/usr/bin/gcc-4.2'
        DEBUG: Assembled command: 'cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a" && ./configure --prefix=/opt/local'
        sh: line 0: cd: /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a: No such file or directory
        Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a" && ./configure --prefix=/opt/local " returned error 1
        DEBUG: Backtrace: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a" && ./configure --prefix=/opt/local " returned error 1 while executing "$procedure $targetname"
        Warning: the following items did not execute (for jpeg): org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
        Error: Status 1 encountered during processing.
        To report a bug, see http://guide.macports.org/#project.tickets

    Read the article

  • Need help with executing and deleting remote bash script via a local bash script

    - by kenja
    I am trying to create a bash script that will scp a script to a remote server, ssh to the remote server (using an ssh key that is already installed), execute the uploaded script, and then delete the remote script when it is finished. I'm not clear on how to run an ssh session inside a bash script. Here are the commands I use to do it from the command line:

        scp my_script.sh [email protected]:/usr/home/user/
        ssh [email protected]
        > sh my_script.sh
        > rm myscript.sh
        > exit

    How do I script the ssh portion of my command list? Thanks!
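
    In case it helps, a minimal sketch of the same sequence as a single non-interactive script; user@remote-host and the paths are placeholders standing in for the real values above:

        #!/bin/bash
        # copy the script up, then run and delete it remotely in one ssh invocation (user@remote-host is a placeholder)
        scp my_script.sh user@remote-host:/usr/home/user/
        ssh user@remote-host 'sh /usr/home/user/my_script.sh && rm /usr/home/user/my_script.sh'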

    Read the article

  • Eclipse editor closing itself

    - by Rohit
    I have been facing this weird problem for some days: when I try to open my Eclipse editor it closes itself and just shows an error message, which is logged in my .log file. I am pasting my log file content here - help!

        !ENTRY org.eclipse.jface 4 0 2012-09-23 01:45:17.443
        !MESSAGE An error has occurred. See error log for more details.
        !STACK 0
        org.eclipse.core.runtime.AssertionFailedException: assertion failed: The application has not been initialized.
        at org.eclipse.core.runtime.Assert.isTrue(Assert.java:110)
        at org.eclipse.core.internal.runtime.InternalPlatform.assertInitialized(InternalPlatform.java:148)
        at org.eclipse.core.internal.runtime.InternalPlatform.getAdapterManager(InternalPlatform.java:169)
        at org.eclipse.core.runtime.Platform.getAdapterManager(Platform.java:628)
        at org.eclipse.ui.part.IntroPart.getAdapter(IntroPart.java:143)
        at org.eclipse.ui.internal.ViewIntroAdapterPart.getAdapter(ViewIntroAdapterPart.java:117)
        at org.eclipse.ui.internal.util.Util.getAdapter(Util.java:109)
        at org.eclipse.ui.internal.services.WorkbenchSourceProvider.getShowInSource(WorkbenchSourceProvider.java:406)
        at org.eclipse.ui.internal.services.WorkbenchSourceProvider.getContext(WorkbenchSourceProvider.java:410)
        at org.eclipse.ui.internal.services.WorkbenchSourceProvider.updateActivePart(WorkbenchSourceProvider.java:478)
        at org.eclipse.ui.internal.services.WorkbenchSourceProvider.checkActivePart(WorkbenchSourceProvider.java:305)
        at org.eclipse.ui.internal.services.WorkbenchSourceProvider.checkActivePart(WorkbenchSourceProvider.java:300)
        at org.eclipse.ui.internal.services.WorkbenchSourceProvider$2.windowDeactivated(WorkbenchSourceProvider.java:270)
        at org.eclipse.ui.internal.Workbench$15.run(Workbench.java:1026)
        at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
        at org.eclipse.ui.internal.Workbench.fireWindowDeactivated(Workbench.java:1024)
        at org.eclipse.ui.internal.WorkbenchWindow$29.shellDeactivated(WorkbenchWindow.java:3169)
        at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:111)
        at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)
        at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1258)
        at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1282)
        at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1263)
        at org.eclipse.swt.widgets.Shell.filterProc(Shell.java:748)
        at org.eclipse.swt.widgets.Display.filterProc(Display.java:1543)

    Read the article

  • Wireless doesn't work on a Lenovo V570

    - by Stephen
    I've had Ubuntu installed on my HD for about 3 months, but ever since I ran into this wireless issue I've kind of lost my love for Ubuntu. I have zero experience getting around with or using the console commands. I have a Lenovo V570. I got the driver update for the Broadcom networking card via the Additional Drivers application, but that did nothing. I love the look and feel of using Ubuntu but I have no technological experience for the matter. Any help would be awesome. When I scan for wireless connections while in Ubuntu, my computer picks up nothing, while on Win7 it will pick up the handful of wireless networks around my area. My wired connection is fine, but having no wireless on a laptop rather defeats the point of it being a laptop. Cheers! Also, I just installed 11.10, if that helps any. Yes, I used the search before I posted this, but again I have ZERO understanding of the command stuff and need a meat-and-potatoes answer.

        stephen@ubuntu:~$ sudo lshw -class network
        [sudo] password for stephen:
        *-network UNCLAIMED
             description: Network controller
             product: BCM4313 802.11b/g/n Wireless LAN Controller
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:03:00.0
             version: 01
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: latency=0
             resources: memory:f1900000-f1903fff
        *-network
             description: Ethernet interface
             product: RTL8111/8168B PCI Express Gigabit Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:04:00.0
             logical name: eth0
             version: 06
             serial: f0:de:f1:63:98:14
             size: 100Mbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.1.78 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
             resources: irq:41 ioport:2000(size=256) memory:f1804000-f1804fff memory:f1800000-f1803fff

        stephen@ubuntu:~$ rfkill list all
        0: ideapad_wlan: Wireless LAN
                Soft blocked: yes
                Hard blocked: no
        1: acer-wireless: Wireless LAN
                Soft blocked: yes
                Hard blocked: no

    Read the article

  • RDS installation failure on 2012 R2 Server Core VM in Hyper-V Server

    - by Giles
    I'm currently installing a test-bed for my firm's infrastructure replacement. 10 or so Windows/Linux servers will be replaced by 2 physical servers running Hyper-V Server. All services (DC, RDS, SQL) will be on Windows 2012 R2 Server Core VMs, Exchange on Server 2012 R2 GUI, and the rest are things like Elastix, MailArchiver etc., which aren't part of the equation thus far. I have installed Hyper-V Server on a test box, and successfully got two virtual DCs running, SQL 2014 running, and an 8.1 VM which I use for the RSAT tools. When trying to install RDS (the old-fashioned kind, not the newer VDI(?) style), I get a failed installation because the server cannot reboot. A couple of articles have said not to do it locally, so I've moved on. Sitting at the PowerShell prompt on the domain controller or SQL server (both Server Core), I run the following commands:

        Import-Module RemoteDesktop
        New-SessionDeployment -ConnectionBroker "AlstersTS.Alsters.local" -SessionHost "AlstersTS.Alsters.local"

    The installation begins, carries on for 2 or 3 minutes, then I receive the following error message:

        New-SessionDeployment : Validation failed for the "RD Connection Broker" parameter.
        AlstersTS.Alsters.local Unable to connect to the server by using Windows PowerShell remoting. Verify that you can connect to the server.
        At line:1 char:1
        + New-SessionDeployment -ConnectionBroker "AlstersTS.Alsters.local" -SessionHost " ...
        +
            + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
            + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,New-SessionDeployment

    So far, I have:

        Triple, triple checked the syntax.
        Tried various other commands, and a script to accomplish the same task.
        Checked that DNS is functioning as it should.
        Checked to the best of my knowledge that AD is working as it should.
        Checked that the Network Service has the needed permissions.
        Created another VM and placed the two roles on different servers.
        Deleted all VMs and started again with a new domain name (lather, rinse, repeat).
        Performed the whole installation on a second physical box running Hyper-V Server.
        Pleaded with it.

    Interestingly, if I perform the installation via a GUI installation, the thing just works! Now I know I could convert this to a Server Core role after installation, but that wouldn't teach me what was wrong in the first instance. I've probably gone 10 pages deep through various Google searches, each page getting a little less relevant. The closest matches seem to have good information, but it doesn't seem to be the fix for my set-up. As a side note, I expected to be able to "tee" or "out-file" the error message into a text file, but couldn't get that to work either, so I've typed the error message in manually. Chaps, any suggestions, from the glaringly obvious to the long-winded and complex? Thanks!
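
    Since the error text points at Windows PowerShell remoting, here is a hedged sketch of checks that could be run from the Server Core console before retrying New-SessionDeployment; the host name is simply the one taken from the error message above:

        # confirm the WinRM service and listener that remoting depends on are enabled
        Enable-PSRemoting -Force
        Test-WSMan AlstersTS.Alsters.local
        # confirm a remoting session to the intended connection broker actually works
        Invoke-Command -ComputerName AlstersTS.Alsters.local -ScriptBlock { $env:COMPUTERNAME }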

    Read the article

  • Eclipse says "Access Denied" when running javaw - how do I fix it?

    - by Eduardo de Luna
    I'm trying to get Eclipse to compile and run a HelloWorld class, but it can't even do that. I have installed the Eclipse x86 SDK 4.2.0 together with the latest JRE and JDK, both in 64-bit. I also have the PATH variables set so the tools respond at the command prompt. I try to run the following code:

        class HelloWorld {
            public static void main(String[] args) {
                System.out.println("Hello World!");
            }
        }

    And it returns the following error:

        Exception occurred executing command line.
        Cannot run program "C:\Program Files\Java\jre7\bin\javaw.exe" (in directory "C:\Users\Default\workspace\devs"): CreateProcess error=5, Access is denied.

    Can you help me fix this? Thanks!

    Read the article

  • Weird FTP issue between Unity Express and Windows Server 2008 FTP

    - by user33975
    My VoIP specialist complained about not being able to run backups of the Unity Express onto our FTP server (Microsoft FTP on Server 2008). I did a packet trace and observed some weird behavior that I think is even kind of funny in a way. The Unity FTP client is able to initiate both control and data connections with no problem, and can even LIST directories and CWD into them. But as soon as the client sends a SYST command (not sure why it cares), the server replies with "Windows_NT" and lo and behold... the client immediately sends a QUIT command! I've seen this happen consistently in my packet captures. I tried pointing the Unity FTP client at a FileZilla FTP server, and voila... it worked fine! Has anyone else observed this? I thought it was kind of funny, given that Cisco seems not to like Microsoft that much...

    Read the article

  • Can't install SQL Express 2008R2- caspol.exe application error - the application failed to initialize

    - by Nir
    I'm trying to install SQL Server Express 2008 R2 on Windows Server 2003 (Enterprise Edition). I get the following error message:

        Title: caspol.exe - Application Error
        Text: The application failed to initialize properly (0x000007b). Click on OK to terminate the application.

    I get the same error message both when downloading the installer and running it, and when using the Web Platform Installer. All the pages on the internet I've found about similar problems say it's a corrupt .NET installation issue - but this server runs multiple .NET apps and I've never had any problems with any of them. I've uninstalled and reinstalled .NET (causing a painful outage) and nothing changed. Does anyone here have any idea what might cause this?

    Update 1: additional information I forgot to include: 32-bit version of Windows running in a virtual machine, no antivirus.

    Update 2: when running caspol.exe from the command line I get the same error.

    Read the article

  • Can't install AMD Catalyst Radeon 12.10 drivers on Ubuntu 12.10 [closed]

    - by Rey Mestidio
    Possible Duplicate: What is the correct way to install ATI Catalyst Video Drivers?
    I am trying to install the latest AMD Catalyst Radeon 12.10 drivers on Ubuntu 12.10. When I get to the "ready to install" screen I get an error message. This is the message from the install log file. I'm not sure what to do. Thanks very much for your help in advance!

        Supported adapter detected.
        Check if system has the tools required for installation.
        fglrx installation requires that the system have kernel headers. /lib/modules/3.5.0-17-generic/build/include/linux/version.h cannot be found on this system.
        One or more tools required for installation cannot be found on the system. Install the required tools before installing the fglrx driver.
        Optionally, run the installer with --force option to install without the tools. Forcing install will disable AMD hardware acceleration and may make your system unstable. Not recommended.

    Read the article

  • Problem with "Transfer-Encoding: chunked" in Apache 2.2

    - by Michal Niklas
    One of the clients of our web service uses an axis2 application that sends HTTP 1.1 requests with a Transfer-Encoding: chunked header. Such a request is refused by our Apache 2.2 with the error page "411 Length Required - A request of the requested method POST requires a valid Content-length." In the Apache logs there is:

        [Mon May 17 09:06:04 2010] [error] [client 127.0.0.1] chunked Transfer-Encoding forbidden: /app/webservices/soap.hdb

    When I send such a message without Transfer-Encoding: chunked and with Content-Length, everything works OK. I searched for how to solve this problem, but I only found how to disable Transfer-Encoding: chunked on the client side. Is there any way to handle it on the server side?

    Read the article

  • Best choice for a personal "online backup" in Europe

    - by marc_s
    I'm looking for an online backup solution for personal use. Besides all the usual requirements (like not being too expensive, since it's for personal use), I'd like to add two requirements:

        the data center should be in Europe (I don't want my personal data stored in the US, when the next crazed president comes along and wants to confiscate and rifle through everybody's files.....)
        the online backup store should be accessible through a drive letter in cmd.exe

    So far, I've looked at a few services, but none have totally convinced me. Dropbox is looking OK, but they insist on creating a silly "My Dropbox" directory in my data path - and there's no way I can choose that name. Sorry - "My everything" is for dummies - I don't like that, I like to name my files and folders according to my liking. LiveDrive is OK, too - they offer European storage, drive letters and all - but those drive letters are only available in Windows Explorer, not on the cmd.exe command line :-( And since I do 99% of my work on the command line, this is a major drawback..... Any other services I haven't looked at worth checking out? Marc

    Read the article

  • How to tune down the Hyperic built-in postgresql database for a small setup

    - by Svish
    We are testing out Hyperic 4.5.1 in a quite small environment for now. Currently there are just 1-5 agents and there probably won't be any more than 10-15. When I run ps ax there are 20(!) postgres processes running. For a small setup like this, that can't be necessary, can it? I'm a software developer and don't have much experience with setting up servers and such though, so don't really know. Either way, what settings are appropriate for a small Hyperic setup like this? Current, default and untouched configuration file, hqdb/data/postgresql.conf: # ----------------------------- # PostgreSQL configuration file # ----------------------------- # # This file consists of lines of the form: # # name = value # # (The '=' is optional.) White space may be used. Comments are introduced # with '#' anywhere on a line. The complete list of option names and # allowed values can be found in the PostgreSQL documentation. The # commented-out settings shown in this file represent the default values. # # Please note that re-commenting a setting is NOT sufficient to revert it # to the default value, unless you restart the server. # # Any option can also be given as a command line switch to the server, # e.g., 'postgres -c log_connections=on'. Some options can be changed at # run-time with the 'SET' SQL command. # # This file is read on server startup and when the server receives a # SIGHUP. If you edit the file on a running system, you have to SIGHUP the # server for the changes to take effect, or use "pg_ctl reload". Some # settings, which are marked below, require a server shutdown and restart # to take effect. # # Memory units: kB = kilobytes MB = megabytes GB = gigabytes # Time units: ms = milliseconds s = seconds min = minutes h = hours d = days #--------------------------------------------------------------------------- # FILE LOCATIONS #--------------------------------------------------------------------------- # The default values of these variables are driven from the -D command line # switch or PGDATA environment variable, represented here as ConfigDir. #data_directory = 'ConfigDir' # use data in another directory # (change requires restart) #hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file # (change requires restart) #ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file # (change requires restart) # If external_pid_file is not explicitly set, no extra PID file is written. #external_pid_file = '(none)' # write an extra PID file # (change requires restart) #--------------------------------------------------------------------------- # CONNECTIONS AND AUTHENTICATION #--------------------------------------------------------------------------- # - Connection Settings - #listen_addresses = 'localhost' # what IP address(es) to listen on; # comma-separated list of addresses; # defaults to 'localhost', '*' = all # (change requires restart) port = 9432 # (change requires restart) max_connections = 100 # (change requires restart) # Note: increasing max_connections costs ~400 bytes of shared memory per # connection slot, plus lock space (see max_locks_per_transaction). You # might also need to raise shared_buffers to support more connections. 
#superuser_reserved_connections = 3 # (change requires restart) #unix_socket_directory = '' # (change requires restart) #unix_socket_group = '' # (change requires restart) #unix_socket_permissions = 0777 # octal # (change requires restart) #bonjour_name = '' # defaults to the computer name # (change requires restart) # - Security & Authentication - #authentication_timeout = 1min # 1s-600s #ssl = off # (change requires restart) #password_encryption = on #db_user_namespace = off # Kerberos #krb_server_keyfile = '' # (change requires restart) #krb_srvname = 'postgres' # (change requires restart) #krb_server_hostname = '' # empty string matches any keytab entry # (change requires restart) #krb_caseins_users = off # (change requires restart) # - TCP Keepalives - # see 'man 7 tcp' for details #tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds; # 0 selects the system default #tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds; # 0 selects the system default #tcp_keepalives_count = 0 # TCP_KEEPCNT; # 0 selects the system default #--------------------------------------------------------------------------- # RESOURCE USAGE (except WAL) #--------------------------------------------------------------------------- # - Memory - shared_buffers = 64MB # min 128kB or max_connections*16kB # (change requires restart) #temp_buffers = 8MB # min 800kB #max_prepared_transactions = 5 # can be 0 or more # (change requires restart) # Note: increasing max_prepared_transactions costs ~600 bytes of shared memory # per transaction slot, plus lock space (see max_locks_per_transaction). work_mem = 2MB # min 64kB maintenance_work_mem = 32MB # min 1MB #max_stack_depth = 2MB # min 100kB # - Free Space Map - max_fsm_pages = 204800 # min max_fsm_relations*16, 6 bytes each # (change requires restart) #max_fsm_relations = 1000 # min 100, ~70 bytes each # (change requires restart) # - Kernel Resource Usage - #max_files_per_process = 1000 # min 25 # (change requires restart) #shared_preload_libraries = '' # (change requires restart) # - Cost-Based Vacuum Delay - #vacuum_cost_delay = 0 # 0-1000 milliseconds #vacuum_cost_page_hit = 1 # 0-10000 credits #vacuum_cost_page_miss = 10 # 0-10000 credits #vacuum_cost_page_dirty = 20 # 0-10000 credits #vacuum_cost_limit = 200 # 0-10000 credits # - Background writer - #bgwriter_delay = 200ms # 10-10000ms between rounds #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round #bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round #bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round #--------------------------------------------------------------------------- # WRITE AHEAD LOG #--------------------------------------------------------------------------- # - Settings - fsync = on # turns forced synchronization on or off #wal_sync_method = fsync # the default is the first option # supported by the operating system: # open_datasync # fdatasync # fsync # fsync_writethrough # open_sync #full_page_writes = on # recover from partial page writes #wal_buffers = 64kB # min 32kB # (change requires restart) commit_delay = 100000 # range 0-100000, in microseconds #commit_siblings = 5 # range 1-1000 # - Checkpoints - checkpoint_segments = 10 # in logfile segments, min 1, 16MB each #checkpoint_timeout = 5min # range 30s-1h #checkpoint_warning = 30s # 0 is off # - Archiving - #archive_command = '' # command to use to archive a logfile segment #archive_timeout = 0 # force a logfile segment switch after this # many 
seconds; 0 is off #--------------------------------------------------------------------------- # QUERY TUNING #--------------------------------------------------------------------------- # - Planner Method Configuration - #enable_bitmapscan = on #enable_hashagg = on #enable_hashjoin = on #enable_indexscan = on #enable_mergejoin = on #enable_nestloop = on #enable_seqscan = on #enable_sort = on #enable_tidscan = on # - Planner Cost Constants - #seq_page_cost = 1.0 # measured on an arbitrary scale #random_page_cost = 4.0 # same scale as above #cpu_tuple_cost = 0.01 # same scale as above #cpu_index_tuple_cost = 0.005 # same scale as above #cpu_operator_cost = 0.0025 # same scale as above #effective_cache_size = 128MB # - Genetic Query Optimizer - #geqo = on #geqo_threshold = 12 #geqo_effort = 5 # range 1-10 #geqo_pool_size = 0 # selects default based on effort #geqo_generations = 0 # selects default based on effort #geqo_selection_bias = 2.0 # range 1.5-2.0 # - Other Planner Options - #default_statistics_target = 10 # range 1-1000 #constraint_exclusion = off #from_collapse_limit = 8 #join_collapse_limit = 8 # 1 disables collapsing of explicit # JOINs #--------------------------------------------------------------------------- # ERROR REPORTING AND LOGGING #--------------------------------------------------------------------------- # - Where to Log - log_destination = 'stderr' # Valid values are combinations of # stderr, syslog and eventlog, # depending on platform. # This is used when logging to stderr: redirect_stderr = on # Enable capturing of stderr into log # files # (change requires restart) # These are only used if redirect_stderr is on: log_directory = '../../logs' # Directory where log files are written # Can be absolute or relative to PGDATA log_filename = 'hqdb-%Y-%m-%d.log' # Log file name pattern. # Can include strftime() escapes #log_truncate_on_rotation = off # If on, any existing log file of the same # name as the new log file will be # truncated rather than appended to. But # such truncation only occurs on # time-driven rotation, not on restarts # or size-driven rotation. Default is # off, meaning append to existing files # in all cases. log_rotation_age = 1d # Automatic rotation of logfiles will # happen after that time. 0 to # disable. #log_rotation_size = 10MB # Automatic rotation of logfiles will # happen after that much log # output. 0 to disable. # These are relevant when logging to syslog: #syslog_facility = 'LOCAL0' #syslog_ident = 'postgres' # - When to Log - #client_min_messages = notice # Values, in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # log # notice # warning # error #log_min_messages = notice # Values, in order of decreasing detail: # debug5 # debug4 # debug3 # debug2 # debug1 # info # notice # warning # error # log # fatal # panic #log_error_verbosity = default # terse, default, or verbose messages #log_min_error_statement = error # Values in order of increasing severity: # debug5 # debug4 # debug3 # debug2 # debug1 # info # notice # warning # error # fatal # panic (effectively off) log_min_duration_statement = 10000 # -1 is disabled, 0 logs all statements # and their durations. 
#silent_mode = off # DO NOT USE without syslog or # redirect_stderr # (change requires restart) # - What to Log - #debug_print_parse = off #debug_print_rewritten = off #debug_print_plan = off #debug_pretty_print = off #log_connections = off #log_disconnections = off #log_duration = off #log_line_prefix = '' # Special values: # %u = user name # %d = database name # %r = remote host and port # %h = remote host # %p = PID # %t = timestamp (no milliseconds) # %m = timestamp with milliseconds # %i = command tag # %c = session id # %l = session line number # %s = session start timestamp # %x = transaction id # %q = stop here in non-session # processes # %% = '%' # e.g. '<%u%%%d> ' #log_statement = 'none' # none, ddl, mod, all #log_hostname = off #--------------------------------------------------------------------------- # RUNTIME STATISTICS #--------------------------------------------------------------------------- # - Query/Index Statistics Collector - #stats_command_string = on #update_process_title = on stats_start_collector = on # needed for block or row stats # (change requires restart) stats_block_level = on stats_row_level = on stats_reset_on_server_start = off # (change requires restart) # - Statistics Monitoring - #log_parser_stats = off #log_planner_stats = off #log_executor_stats = off #log_statement_stats = off #--------------------------------------------------------------------------- # AUTOVACUUM PARAMETERS #--------------------------------------------------------------------------- #autovacuum = off # enable autovacuum subprocess? # 'on' requires stats_start_collector # and stats_row_level to also be on #autovacuum_naptime = 1min # time between autovacuum runs #autovacuum_vacuum_threshold = 500 # min # of tuple updates before # vacuum #autovacuum_analyze_threshold = 250 # min # of tuple updates before # analyze #autovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before # vacuum #autovacuum_analyze_scale_factor = 0.1 # fraction of rel size before # analyze #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum # (change requires restart) #autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for # autovacuum, -1 means use # vacuum_cost_delay #autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for # autovacuum, -1 means use # vacuum_cost_limit #--------------------------------------------------------------------------- # CLIENT CONNECTION DEFAULTS #--------------------------------------------------------------------------- # - Statement Behavior - #search_path = '"$user",public' # schema names #default_tablespace = '' # a tablespace name, '' uses # the default #check_function_bodies = on #default_transaction_isolation = 'read committed' #default_transaction_read_only = off #statement_timeout = 0 # 0 is disabled #vacuum_freeze_min_age = 100000000 # - Locale and Formatting - datestyle = 'iso, mdy' #timezone = unknown # actually, defaults to TZ # environment setting #timezone_abbreviations = 'Default' # select the set of available timezone # abbreviations. Currently, there are # Default # Australia # India # However you can also create your own # file in share/timezonesets/. 
#extra_float_digits = 0 # min -15, max 2 #client_encoding = sql_ascii # actually, defaults to database # encoding # These settings are initialized by initdb -- they might be changed lc_messages = 'C' # locale for system error message # strings lc_monetary = 'C' # locale for monetary formatting lc_numeric = 'C' # locale for number formatting lc_time = 'C' # locale for time formatting # - Other Defaults - #explain_pretty_print = on #dynamic_library_path = '$libdir' #local_preload_libraries = '' #--------------------------------------------------------------------------- # LOCK MANAGEMENT #--------------------------------------------------------------------------- #deadlock_timeout = 1s #max_locks_per_transaction = 64 # min 10 # (change requires restart) # Note: each lock table slot uses ~270 bytes of shared memory, and there are # max_locks_per_transaction * (max_connections + max_prepared_transactions) # lock table slots. #--------------------------------------------------------------------------- # VERSION/PLATFORM COMPATIBILITY #--------------------------------------------------------------------------- # - Previous Postgres Versions - #add_missing_from = off #array_nulls = on #backslash_quote = safe_encoding # on, off, or safe_encoding #default_with_oids = off #escape_string_warning = on #standard_conforming_strings = off #regex_flavor = advanced # advanced, extended, or basic #sql_inheritance = on # - Other Platforms & Clients - #transform_null_equals = off #--------------------------------------------------------------------------- # CUSTOMIZED OPTIONS #--------------------------------------------------------------------------- #custom_variable_classes = '' # list of custom variable class names SELECT * FROM pg_stat_activity; datid | datname | procpid | usesysid | usename | current_query | waiting | query_start | backend_start | client_addr | client_port -------+---------+---------+----------+---------+---------------------------------+---------+-------------------------------+-------------------------------+-------------+------------- 16384 | hqdb | 3267 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.036781+01 | 2011-02-08 15:51:20.02413+01 | 127.0.0.1 | 47892 16384 | hqdb | 3268 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.050994+01 | 2011-02-08 15:51:20.047393+01 | 127.0.0.1 | 47893 16384 | hqdb | 3269 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.056661+01 | 2011-02-08 15:51:20.053201+01 | 127.0.0.1 | 47894 16384 | hqdb | 3271 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.062351+01 | 2011-02-08 15:51:20.058822+01 | 127.0.0.1 | 47895 16384 | hqdb | 3272 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.068328+01 | 2011-02-08 15:51:20.064517+01 | 127.0.0.1 | 47896 16384 | hqdb | 3273 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.07444+01 | 2011-02-08 15:51:20.070755+01 | 127.0.0.1 | 47897 16384 | hqdb | 3274 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.080941+01 | 2011-02-08 15:51:20.076983+01 | 127.0.0.1 | 47898 16384 | hqdb | 3275 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.08741+01 | 2011-02-08 15:51:20.083697+01 | 127.0.0.1 | 47899 16384 | hqdb | 3276 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.093597+01 | 2011-02-08 15:51:20.089977+01 | 127.0.0.1 | 47900 16384 | hqdb | 3277 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:20.133974+01 | 2011-02-08 15:51:20.096149+01 | 127.0.0.1 | 47901 16384 | hqdb | 3308 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:49:27.402197+01 | 2011-02-08 15:51:29.826321+01 | 127.0.0.1 | 47902 16384 | hqdb | 
3309 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.572395+01 | 2011-02-08 15:51:29.865243+01 | 127.0.0.1 | 47903 16384 | hqdb | 3310 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.586273+01 | 2011-02-08 15:51:29.874346+01 | 127.0.0.1 | 47904 16384 | hqdb | 3311 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:03.024088+01 | 2011-02-08 15:51:29.883598+01 | 127.0.0.1 | 47905 16384 | hqdb | 3312 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:35.804457+01 | 2011-02-08 15:51:29.892925+01 | 127.0.0.1 | 47906 16384 | hqdb | 3418 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.580207+01 | 2011-02-08 15:51:55.56911+01 | 127.0.0.1 | 47910 16384 | hqdb | 3419 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.59781+01 | 2011-02-08 15:51:55.588609+01 | 127.0.0.1 | 47911 16384 | hqdb | 3422 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:02.668836+01 | 2011-02-08 15:51:55.603076+01 | 127.0.0.1 | 47914 16384 | hqdb | 3421 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.770427+01 | 2011-02-08 15:51:55.603086+01 | 127.0.0.1 | 47913 16384 | hqdb | 3420 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.680785+01 | 2011-02-08 15:51:55.637058+01 | 127.0.0.1 | 47912 16384 | hqdb | 18233 | 10 | hqadmin | SELECT * FROM pg_stat_activity; | f | 2011-02-09 10:49:29.688949+01 | 2011-02-09 10:48:13.031475+01 | | -1 (21 rows)

    Read the article

  • Amazon EC2 EBS volume scheduled backup/snapshots using puppet

    - by Ehrann Mehdan
    I am not a Linux admin, although I wish I was, and I have seen these questions:

        Amazon EC2 Backup Strategy
        Amazon EC2 + EBS:: Regular backup plan?
        Simple Backup Strategy for Amazon EC2 instances / volumes?

    and this suggestion: http://alestic.com/2009/09/ec2-consistent-snapshot

    I tried using the command line + crontab (the command line works, but crontab for some reason doesn't), but I'm still pretty lost. All I want is an automated, rolling backup of my Amazon EC2 (EBS) data (by rolling I mean keep 3-4 weeks back, but delete old snapshots as new ones come in, for cost control). And as things usually go, if there is something that is hard and painful, someone creates a solution for it. My question is simple: is there a way to do it using a tool like Puppet, without a painful learning curve (or via other tools like http://ylastic.com)? If yes, how?
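
    For illustration only, a hedged crontab sketch of the usual reason something "works on the command line but not from crontab": cron starts with a minimal environment, so the EC2 API tools' variables and absolute paths have to be supplied explicitly. The volume ID, the environment file and the log path here are hypothetical placeholders:

        # m h dom mon dow  command
        # load JAVA_HOME/EC2_HOME/EC2_PRIVATE_KEY/EC2_CERT from a placeholder env file, then snapshot one EBS volume nightly at 02:00
        0 2 * * * . /home/ubuntu/.ec2env && $EC2_HOME/bin/ec2-create-snapshot vol-12345678 >> /var/log/ebs-snapshot.log 2>&1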

    Read the article

  • Debugging JuniperSetupClientInstaller.exe Problems

    - by Damon
    I recently moved from Windows 7 to Windows 2008 Server so I can run SharePoint on my physical machine and not through a VPC, so I've been trying to get everything re-installed on my system. As part of that process, I tried re-establishing a connection back to one of my clients' corporate networks, and their system prompted me to run JuniperSetupClientInstaller.exe. Normally this runs, finishes, and you can connect to the VPN no problem. This time, however, it failed. Unfortunately, there were no error messages to let me know why - it just didn't work. I've had success running applications in "compatibility mode", so I gave that a shot - same problem. But during the installation I noticed that JuniperSetupClientInstaller.exe unpacks a number of files into a directory (you can see the exact location in the details of the installer) and then runs a DIFFERENT application - JuniperSetupClient.exe. If you navigate to that directory, you will see a text file named JuniperSetupClient.log that contains information about the setup process. In my case, I had installed a SharePoint site on port 3333 - a port the Juniper software needs in order to communicate with the VPN. There was a nice message in the log file saying the VPN software could not bind to port 3333, which quickly alerted me to the issue, and moving the site off that port fixed it. However, it would have been nice to have had an error message of sorts, because I spent a chunk of time futilely researching compatibility issues.

    Read the article

  • First round playing with Memcached

    - by Shaun
    To be honest I have not been very interested in the caching before I’m going to a project which would be using the multi-site deployment and high connection and concurrency and very sensitive to the user experience. That means we must cache the output data for better performance. After looked for the Internet I finally focused on the Memcached. What’s the Memcached? I think the description on its main site gives us a very good and simple explanation. Free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering. Memcached is simple yet powerful. Its simple design promotes quick deployment, ease of development, and solves many problems facing large data caches. Its API is available for most popular languages. The original Memcached was built on *nix system are is being widely used in the PHP world. Although it’s not a problem to use the Memcached installed on *nix system there are some windows version available fortunately. Since we are WISC (Windows – IIS – SQL Server – C#, which on the opposite of LAMP) it would be much easier for us to use the Memcached on Windows rather than *nix. I’m using the Memcached Win X64 version provided by NorthScale. There are also the x86 version and other operation system version.   Install Memcached Unpack the Memcached file to a folder on the machine you want it to be installed, we can see that there are only 3 files and the main file should be the “memcached.exe”. Memcached would be run on the server as a service. To install the service just open a command windows and navigate to the folder which contains the “memcached.exe”, let’s say “C:\Memcached\”, and then type “memcached.exe -d install”. If you are using Windows Vista and Windows 7 system please be execute the command through the administrator role. Right-click the command item in the start menu and use “Run as Administrator”, otherwise the Memcached would not be able to be installed successfully. Once installed successful we can type “memcached.exe -d start” to launch the service. Now it’s ready to be used. The default port of Memcached is 11211 but you can change it through the command argument. You can find the help by typing “memcached -h”.   Using Memcached Memcahed has many good and ready-to-use providers for vary program language. After compared and reviewed I chose the Memcached Providers. It’s built based on another 3rd party Memcached client named enyim.com Memcached Client. The Memcached Providers is very simple to set/get the cached objects through the Memcached servers and easy to be configured through the application configuration file (aka web.config and app.config). Let’s create a console application for the demonstration and add the 3 DLL files from the package of the Memcached Providers to the project reference. Then we need to add the configuration for the Memcached server. Create an App.config file and firstly add the section on top of it. Here we need three sections: the section for Memcached Providers, for enyim.com Memcached client and the log4net. 
1: <configSections> 2: <section name="cacheProvider" 3: type="MemcachedProviders.Cache.CacheProviderSection, MemcachedProviders" 4: allowDefinition="MachineToApplication" 5: restartOnExternalChanges="true"/> 6: <sectionGroup name="enyim.com"> 7: <section name="memcached" 8: type="Enyim.Caching.Configuration.MemcachedClientSection, Enyim.Caching"/> 9: </sectionGroup> 10: <section name="log4net" 11: type="log4net.Config.Log4NetConfigurationSectionHandler,log4net"/> 12: </configSections> Then we will add the configuration for 3 of them in the App.config file. The Memcached server information would be defined under the enyim.com section since it will be responsible for connect to the Memcached server. Assuming I installed the Memcached on two servers with the default port, the configuration would be like this. 1: <enyim.com> 2: <memcached> 3: <servers> 4: <!-- put your own server(s) here--> 5: <add address="192.168.0.149" port="11211"/> 6: <add address="10.10.20.67" port="11211"/> 7: </servers> 8: <socketPool minPoolSize="10" maxPoolSize="100" connectionTimeout="00:00:10" deadTimeout="00:02:00"/> 9: </memcached> 10: </enyim.com> Memcached supports the multi-deployment which means you can install the Memcached on the servers as many as you need. The protocol of the Memcached responsible for routing the cached objects into the proper server. So it’s very easy to scale-out your system by Memcached. And then define the Memcached Providers configuration. The defaultExpireTime indicates how long the objected cached in the Memcached would be expired, the default value is 2000 ms. 1: <cacheProvider defaultProvider="MemcachedCacheProvider"> 2: <providers> 3: <add name="MemcachedCacheProvider" 4: type="MemcachedProviders.Cache.MemcachedCacheProvider, MemcachedProviders" 5: keySuffix="_MySuffix_" 6: defaultExpireTime="2000"/> 7: </providers> 8: </cacheProvider> The last configuration would be the log4net. 1: <log4net> 2: <!-- Define some output appenders --> 3: <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender"> 4: <layout type="log4net.Layout.PatternLayout"> 5: <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline"/> 6: </layout> 7: </appender> 8: <!--<threshold value="OFF" />--> 9: <!-- Setup the root category, add the appenders and set the default priority --> 10: <root> 11: <priority value="WARN"/> 12: <appender-ref ref="ConsoleAppender"> 13: <filter type="log4net.Filter.LevelRangeFilter"> 14: <levelMin value="WARN"/> 15: <levelMax value="FATAL"/> 16: </filter> 17: </appender-ref> 18: </root> 19: </log4net>   Get, Set and Remove the Cached Objects Once we finished the configuration it would be very simple to consume the Memcached servers. The Memcached Providers gives us a static class named DistCache that can be used to operate the Memcached servers. Get<T>: Retrieve the cached object from the Memcached servers. If failed it will return null or the default value. Add: Add an object with a unique key into the Memcached servers. Assuming that we have an operation that retrieve the email from the name which is time consuming. This is the operation that should be cached. The method would be like this. I utilized Thread.Sleep to simulate the long-time operation. 1: static string GetEmailByNameSlowly(string name) 2: { 3: Thread.Sleep(2000); 4: return name + "@ethos.com.cn"; 5: } Then in the real retrieving method we will firstly check whether the name, email information had been searched previously and cached. 
If yes we will just return them from the Memcached, otherwise we will invoke the slowly method to retrieve it and then cached. 1: static string GetEmailByName(string name) 2: { 3: var email = DistCache.Get<string>(name); 4: if (string.IsNullOrEmpty(email)) 5: { 6: Console.WriteLine("==> The name/email not be in memcached so need slow loading. (name = {0})==>", name); 7: email = GetEmailByNameSlowly(name); 8: DistCache.Add(name, email); 9: } 10: else 11: { 12: Console.WriteLine("==> The name/email had been in memcached. (name = {0})==>", name); 13: } 14: return email; 15: } Finally let’s finished the calling method and execute. 1: static void Main(string[] args) 2: { 3: var name = string.Empty; 4: while (name != "q") 5: { 6: Console.Write("==> Please enter the name to find the email: "); 7: name = Console.ReadLine(); 8:  9: var email = GetEmailByName(name); 10: Console.WriteLine("==> The email of {0} is {1}.", name, email); 11: } 12: } The first time I entered “ziyanxu” it takes about 2 seconds to get the email since there’s nothing cached. But the next time I entered “ziyanxu” it returned very quickly from the Memcached.   Summary In this post I explained a bit on why we need cache, what’s Memcached and how to use it through the C# application. The example is fairly simple but hopefully demonstrated on how to use it. Memcached is very easy and simple to be used since it gives you the full opportunity to consider what, when and how to cache the objects. And when using Memcached you don’t need to consider the cache servers. The Memcached would be like a huge object pool in front of you. The next step I’m thinking now are: What kind of data should be cached? And how to determined the key? How to implement the cache as a layer on top of the business layer so that the application will not notice that the cache is there. How to implement the cache by AOP so that the business logic no need to consider the cache. I will investigate on them in the future and will share my thoughts and results.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Has anyone else had problems with the miserable kb980408 update?

    - by Dan
    I hadn't experienced any problems before installing kb980408 for Windows 7 x86. The update made it impossible for me to log into my computer: it hangs after displaying a "Setting up personalized settings for Windows Desktop Update" message, and after killing that task a "Windows Explorer is not responding - Windows is checking for a solution to the problem" message appears, then it waits forever... If I killed that task, I was not able to continue the boot process, and if I ran explorer.exe manually, the same thing happened. The solution, by the way, was to boot into safe mode and use the add/remove programs item in the Control Panel to uninstall this wretched piece of code. Start-up repair and/or the use of System Restore will not help you; you need to uninstall.

    Read the article

  • run sfc /scannow as administrator, but I am administrator

    - by Luigi
    On my Windows 2003 server I have to run sfc /scannow as admin. I have tried to run it as the local administrator and as a domain administrator, but it says I need administrator privileges. I have also tried runas /user:administrator cmd and then, in the new shell, sfc /scannow, but that does not work either. The error message is:

        You must be an administrator running a console session in order to use the Windows File Checker utility.

    The error message is actually in Italian and translates to English as above. (A cmd screenshot of the error was attached in the original post.) I am connected as a domain administrator, but I also ran it with runas to be the local admin.

    Read the article
