Search Results

Search found 9128 results on 366 pages for 'execute immediate'.


  • Setting up xpra for client use in OS X

    - by Jonathan
    I've been trying for the last few days to get xpra running on OS X to connect to my Ubuntu server. Note that there's a GUI for it called Shifter, but that (at least on OS X) is still far too buggy. For those who don't know what xpra is: if you know what screen is, xpra is like screen for GUI X Window apps, tunneled over ssh. You can render a remote X app locally, so it's faster than sending a series of compressed screenshots (like VNC), but with xpra you can disconnect and reconnect on different computers. To get the basic functionality you can just type "ssh -X server.location" and any GUI app you open from the command line will open locally.
    I've been able to get xpra to build by doing the following:
    1. Download parti-all-0.0.6.tar.gz from the xpra site listed under "upstream" and untar it.
    2. Issue the following MacPorts command (dependencies thanks to RogBlog): sudo port install python25 python26 py26-pyrex py26-gtk xorg-libXtst py25-gobject py25-gtk py25-nose py26-nose xorg-libXdamage xorg-libXcomposite xorg-libXtst xorg-libXfixes
    3. In the upstream list of v0.0.6 patches (NOT 0.0.8pre!) on the xpra site listed above, download mswindows-conditional-pyrex.patch.
    4. Open the patch with your favorite text editor and change the single occurrence of "win" in it to "darwin".
    5. Apply the patch to setup.py.
    6. Run do-build from the command line.
    Now here's where I'm stumped: how do I run xpra? The build produces a subdirectory called install/bin in which xpra is located, but when I try to run it I get the following error:
        Traceback (most recent call last):
          File "./xpra", line 4, in <module>
            import xpra.scripts.main
        ImportError: No module named xpra.scripts.main
    There is a file called main.py under xpra/scripts, but I don't know any Python and I'm not sure if this is what it's looking for, or what to do with it even if it is. My goal is to set up xpra so I can install it into /usr/bin (or some other common path for executables) and execute it whenever I please. What do I do next?
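
    A hedged sketch of the likely missing piece (the exact directory layout under install/ is an assumption; check what do-build actually produced): the launcher dies because Python can't find the xpra package, so point PYTHONPATH at the library directory the build installed into:
        # Run from the untarred source tree; paths below are assumptions.
        cd install
        # Let Python find the xpra package, then invoke the launcher.
        PYTHONPATH="$PWD/lib/python" ./bin/xpra --help
        # If that works, a small wrapper script in /usr/bin can export
        # PYTHONPATH and exec the real binary.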


  • When run as a scheduled task, cannot save an Excel workbook when using Excel.Application COM object in PowerShell

    - by Daniel Richnak
    I'm having an issue where I've automated creating an Excel.Application COM object, adding some data into a workbook, and then saving the document as an xlsx. This works fine if:
    - I'm already in the PowerShell interactive host and either run each command in sequence or execute it as a .ps1.
    - I run it from cmd.exe, using the syntax: powershell.exe -command "c:\path\to\powershellscript.ps1"
    - I create a scheduled task in Windows 7 / Server 2008 R2, use the above powershell.exe -command syntax, and use the mode "Run only when the user is logged on".
    It fails when I modify the same scheduled task but set it to "Run whether the user is logged on or not". Here's a sample script that illustrates the problem I'm having:
        $Excel = New-Object -Com Excel.Application
        $Excelworkbook = $Excel.Workbooks.Add()
        $excelworkbook.SaveAs("C:\temp\test.xlsx")
        $excelworkbook.Close()
    I have a theory that the COM object fails somehow if my profile isn't loaded or if it's not performed in a command window. Any ideas on which options to choose when creating the scheduled task, or which options to use when creating the Excel object or using the SaveAs() function? Can anybody reproduce this? I've been able to see this behavior on both a Server 2008 R2 machine and Windows 7. Haven't tried other platforms.
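
    A hedged note on the "logged on or not" failure mode: a workaround that is widely reported for Office COM automation from non-interactive scheduled tasks (offered as an assumption to test, not a confirmed fix) is creating the Desktop folder that Excel looks for under the system profile, which doesn't exist by default:
        rem Commonly reported workaround -- create the Desktop folders Excel
        rem expects when running without an interactive profile (64-bit and
        rem 32-bit system profiles).
        mkdir "C:\Windows\System32\config\systemprofile\Desktop"
        mkdir "C:\Windows\SysWOW64\config\systemprofile\Desktop"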


  • Method to integrate Powershell scripts with non-Windows workflow?

    - by Matt Simmons
    I love the smell of new machines in the morning. I'm automating a machine-creation workflow that involves several separate systems across my infrastructure, some of which involve 15-year-old Perl scripts on Solaris hosts, PXE-booting Linux systems, and PowerShell on Windows Server 2008. I can script each of the individual parts, and integrating the Linux and Unix automation is fairly straightforward, but I'm at a loss as to how to reliably tie the PowerShell scripts to the rest of the processes. I would prefer the process to begin on a Linux host, since I imagine it will end up as a web application living on an Apache server, but if it needs to begin on Windows, I am hesitantly okay with that. I would ideally like something along the lines of psexec for Linux to run against Windows, but the answer in that direction appears to be Cygwin, and as much as I appreciate all of the hard work they put in, it has never felt right, if you know what I mean. It's great for a desktop and gives a lot of functionality, but I feel like Windows servers should be treated like Windows servers and not bastardized Unix machines (which, incidentally, is my argument against OS X servers, too, and they're actually Unix). Anyway, I don't want to go with Cygwin unless that's the last and only option. So I guess what I'm asking is if there is a way to execute jobs on Windows machines from Linux. Without Cygwin. I'm open to ideas and suggestions, including "Look idiot, everyone uses Cygwin, so suck it up and deal with it". Thanks in advance!
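
    One direction worth sketching, hedged (winexe is a third-party tool whose flags vary by version, and every name below is made up): winexe behaves roughly like psexec run from a Linux box, so the Linux-hosted workflow could call the PowerShell stage directly:
        # Hypothetical: run a PowerShell script on a Windows host from Linux.
        winexe -U 'MYDOMAIN/provision%secret' //winserver01 \
            'powershell.exe -ExecutionPolicy Bypass -File C:\scripts\newmachine.ps1'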


  • Laptop authentication/logon via accelerometer tilt, flip, and twist

    - by wonsungi
    Looking for another application/technology: a number of years ago, I read about a novel way to authenticate and log on to a laptop. The user simply had to hold the laptop in the air and execute a simple series of tilts and flips. By logging accelerometer data, this creates a unique signature for the user. Even if an attacker watched and repeated the exact same motions, the attacker could not replicate the user's movements closely enough. I am looking for information about this technology again, but I can't find anything. It may have been an actual feature on a laptop, or it may have just been a research project. I think I read about it in a magazine like Wired. Does anyone have more information about authentication via unique accelerometer signatures? Here are the closest articles I have been able to find:
    - Knock-based commands for your Linux laptop
    - Shake Well Before Use: Authentication Based on Accelerometer Data [PDF]
    - Inferring Identity using Accelerometers in Television Remote Controls
    - User Evaluation of Lightweight User Authentication with a Single Tri-Axis Accelerometer
    - Identifying Users of Portable Devices from Gait Pattern with Accelerometers [PDF]
    - 3D Signature Biometrics Using Curvature Moments [PDF]
    - MoViSign: A novel authentication mechanism using mobile virtual signatures


  • PHP / SSH2 Multi-threading

    - by Asad Moeen
    I'm basically done using SSH2 with PHP. Some may already know that while using it, the PHP code actually waits for all the listed commands to be executed over SSH, and only when everything is done does it give back the results. That is fine for the work I am doing, but I need some commands to be multi-threaded.
        $cmd = MyCommand;
        echo $ssh->exec($cmd);
    So I just want this to run in parallel 2 times. I googled some stuff but didn't get along with it. For a basic approach, I came across this way posted by someone, but it didn't work out for me:
        for ($i = 0; $i < 2; $i += 1) {
            // This will execute test_1.php and leave that process running in the
            // background, going to the next iteration of the loop immediately
            // without waiting for the script in test_1.php to complete;
            // $i is passed as an argument.
            exec("php test_1.php $i > test.txt &");
        }
    I tried to put it this way inside the loop:
        exec("echo $ssh->exec($cmd) $i >> test.txt &");
    but either it never entered the loop or the echo of $ssh->exec failed. I don't really need very neat multi-threading. Even a single second delay would do; thank you.
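
    A hedged sketch of one workaround (command name and paths are placeholders): since $ssh->exec() blocks until the remote command exits, have the remote side background the job itself; the channel then closes at once and the PHP loop moves on. The trick is in the command string, not in SSH2:
        # What the string passed to $ssh->exec() would look like: redirect all
        # output and detach with &, so exec() returns without waiting.
        mycommand > /tmp/run_1.log 2>&1 &
        mycommand > /tmp/run_2.log 2>&1 &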


  • Orphaned SQL Recordsets/Connections with IIS

    - by Damian
    I have an IIS 6 site running on Windows Server 2003 x86 with MS SQL 2005 Enterprise Edition, running Classic ASP (no choice). The site runs very fast, with about 8000 page views per hour. All of my SQL tables are indexed, and I have used the profiler to check my queries, the slowest of which takes only about 10-15ms. I have autoshrink disabled, autogrow is set to 250MB, and the database is 2GB with 800MB of free space. My problem is that every now and then the site will slow to a crawl for no reason. Pages that just do a simple "connect to database and increment a hit counter" work OK, but more SQL-intensive pages that normally execute in about 60ms take 25,000ms to run. This happens for about 30 seconds and then goes away. I was having an issue with orphaned recordsets and connections due to the way I was releasing them. I have fixed this up and the issue is much better, but I am still getting them. Is there a way with perfmon, etc. to track when SQL Server or Windows closes these orphaned connections? At least if I can monitor the issue I will know if I am making progress or if I am even looking at the right things. Is there anything else I might be missing? Thank you!
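
    For watching rather than guessing, a hedged sketch (the column choices are mine; in perfmon the standard counter is "SQLServer:General Statistics - User Connections"): sample the SQL 2005 DMVs on a schedule and log the counts, so a drop in the totals shows when the server reaps the orphans:
        rem Log a timestamped connection/session count (run from a scheduled task).
        sqlcmd -S localhost -E -Q "SELECT GETDATE() AS sampled_at, (SELECT COUNT(*) FROM sys.dm_exec_connections) AS conns, (SELECT COUNT(*) FROM sys.dm_exec_sessions WHERE is_user_process = 1) AS user_sessions" >> C:\logs\sqlconn.log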


  • CC.NET + SVN : Server certificate issue

    - by MSI
    I am trying to set up Continuous Integration in our office. Being a puny little developer, I am facing this supposedly infamous problem:
        Source control operation failed: svn: OPTIONS of 'https://trunkURL':
        Server certificate verification failed: issuer is not trusted
    So I tried the following solution: run the CC.NET service (server running as a Windows service) using a domain account (rather than the default LOCAL SYSTEM) and accept the certificate permanently from a command prompt under that user by using svn log/list on the repo. Doesn't help :(. I am getting the following in my artifact/log files (or dashboard):
        ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed:
        svn: OPTIONS of 'https://TrunkURL': Server certificate verification failed:
        issuer is not trusted (https://ServerAdd).
        Process command: E:\(svn.exe Path) log https://TrunkURL -r "{2010-11-08T02:12:20Z}:{2010-11-08T02:13:21Z}" --verbose --xml --no-auth-cache --non-interactive
          at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
          at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
          at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModificationsWithLogging(ISourceControl sc, IIntegrationResult from, IIntegrationResult to)
          at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
          at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
          at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)
    We are using VisualSVN Server and CC.NET for this adventure. Tips and suggestions will be highly appreciated. Thanks
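
    One hedged thing to verify (the account and paths are placeholders): Subversion records accepted certificates per user under %APPDATA%\Subversion\auth, so the permanent accept has to happen in the profile of the exact account the CC.NET service logs on as, using the same svn.exe the server invokes:
        rem Open a prompt as the service account, then accept the cert with (p).
        runas /user:MYDOMAIN\ccnetsvc cmd
        "E:\path\to\svn.exe" list https://server/svn/trunk
        rem Confirm the acceptance landed in this profile:
        dir "%APPDATA%\Subversion\auth\svn.ssl.server"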


  • How can I enable IIS to run Perl scripts?

    - by eidylon
    We're trying to get AWStats up and running on our IIS 6 server. AWStats is running fine and generating output and all that jazz... no problem there. When trying to change the selected month/year in the output page, though, it tries to run awstats.pl through IIS and comes up with a 404 error. To debug, I made a simple hello.pl in my root and tried to run that; it also 404s. I followed the directions on this page http://support.microsoft.com/kb/245225 regarding installing ActiveState Perl and then configuring IIS. I added the extension mapping on my directory and registered the web service extension as directed. The Perl scripts all run fine and produce output if run from the command line, so I know Perl is good, but I can't get IIS to find the files. (Screenshots of the site's Home Directory tab and of the web service extension configuration were attached here.) I turned on directory browsing for this site, and when I get the listing of the directory, IIS actually does show the .pl files being in the directory. But if I click on one of them, I get the 404 error.
    Update 12/17 15:22: Also tried adding .pl as a MIME type on my site's configuration. This did not help.
    Update 12/17 16:57: Also tried Everyone Read/Execute permissions on both the Perl directory and the directory housing AWStats. This did not help.


  • Allowing non-admin users to unstick the print spooler

    - by Reafidy
    I currently have an issue where the print queue is getting stuck on a central print server (Windows Server 2008). Using the "Clear all documents" function does not clear it and gets stuck too. I need non-admin users to be able to clear the print queue from their workstations. I have tried using the following WinForms program, which I created; it allows a user to stop the print spooler, delete the printer files in the C:\Windows\System32\spool\PRINTERS folder, and then start the print spooler again. But this functionality requires the program to be run as an administrator. How can I allow my normal users to execute this program without giving them admin privileges? Or is there another way I can allow normal users to clear the print queue on the server?
        Imports System.ServiceProcess

        Public Class Form1
            Private Sub Button1_Click(sender As System.Object, e As System.EventArgs) Handles Button1.Click
                ClearJammedPrinter()
            End Sub

            Public Sub ClearJammedPrinter()
                Dim tspTimeOut As TimeSpan = New TimeSpan(0, 0, 5)
                Dim controllerStatus As ServiceControllerStatus = ServiceController1.Status
                Try
                    If ServiceController1.Status <> ServiceProcess.ServiceControllerStatus.Stopped Then
                        ServiceController1.Stop()
                    End If
                    Try
                        ServiceController1.WaitForStatus(ServiceProcess.ServiceControllerStatus.Stopped, tspTimeOut)
                    Catch
                        Throw New Exception("The controller could not be stopped")
                    End Try
                    Dim strSpoolerFolder As String = "C:\Windows\System32\spool\PRINTERS"
                    Dim s As String
                    For Each s In System.IO.Directory.GetFiles(strSpoolerFolder)
                        System.IO.File.Delete(s)
                    Next s
                Catch ex As Exception
                    MsgBox(ex.Message)
                Finally
                    Try
                        Select Case controllerStatus
                            Case ServiceControllerStatus.Running
                                If ServiceController1.Status <> ServiceControllerStatus.Running Then ServiceController1.Start()
                            Case ServiceControllerStatus.Stopped
                                If ServiceController1.Status <> ServiceControllerStatus.Stopped Then ServiceController1.Stop()
                        End Select
                        ServiceController1.WaitForStatus(controllerStatus, tspTimeOut)
                    Catch
                        MsgBox(String.Format("{0}{1}", "The print spooler service could not be returned to its original setting and is currently: ", ServiceController1.Status))
                    End Try
                End Try
            End Sub
        End Class
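
    On the privileges question, a hedged sketch (the group name is made up, and subinacl's permission letters should be checked against its own help output): the Spooler service's control ACL can be opened up so a group of ordinary users may stop and start it, and an NTFS grant covers the file deletion; together that removes the need to run the program elevated:
        rem Sketch: T/O/P are start/stop/pause rights in subinacl's service syntax.
        subinacl /service Spooler /grant=MYDOMAIN\PrintUsers=TOP
        rem Let the same group delete stuck spool files (modify rights).
        icacls "C:\Windows\System32\spool\PRINTERS" /grant "MYDOMAIN\PrintUsers":(OI)(CI)M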


  • VirtualBox management interface unreliability

    - by Arlen Cuss
    I'm using VirtualBox 3.2.8_OSE with 20 VMs running, and everything's going fine. But I find that if I hammer the VBoxManage interface, all sorts of interesting things happen, usually necessitating either a restart of the VM in question or of all VMs. For instance, if I use VBoxManage guestcontrol execute to run processes, after a few hours of using it maybe once or twice a minute on any given VM, it'll mysteriously start reporting VERR_NOT_IMPLEMENTED and refusing to do anything. Sometimes trying to restart /usr/sbin/VBoxService on the VM itself will get it back in working order, but often it won't, and in the meantime no data can be collected using VBoxManage. Such data includes the VM's IP, so if I hadn't recorded it earlier, I'm usually in trouble and have no option but to portscan the network for it, or kill the VM's process on the host manually and restart it. This one I haven't narrowed down yet, but it seems that even using VBoxManage guestproperty get (to retrieve a machine's IP) frequently and rapidly is enough to cause all VMs' management interfaces to die. The processes are still running fine, but VBoxManage reports them all as "powered off". In the meantime, another process somewhere in the system seems to have decided that their being powered off means they need to be powered on again, and suddenly I have twice as many VBoxHeadless processes running as I used to. Has anyone else seen behaviour like this? Is there any workaround? This is a serious impediment to my work, as I've had to resort to a lot of (hacky) caching of data and rate-limiting how often I call VBoxManage, just in case I accidentally bring 20 VMs to their knees.


  • Synergy: Cannot send media keys from Linux to Mac

    - by CraftyThumber
    I have a Linux Synergy server (Si-Linux) serving just one Mac client, a MacBook Pro with a UK keyboard (SiBook-Pro.local). On the Linux server I am using a USB Apple keyboard with the exact layout of the laptop's keyboard (the compact UK aluminium keyboard). I would like to send the media keys to the Mac client at all times, and I have attempted the following in my synergy.conf:
        keystroke(AudioPlay) = keystroke(AudioPlay,SiBook-Pro.local)
    This did not seem to work, so I ran both the server and client as foreground processes with debugging enabled and observed the following.
    Server log:
        DEBUG1: activate actions
        DEBUG1: hotkey: keyDown(AudioPlay,SiBook-Pro.local)
        DEBUG1: onKeyDown id=57523 mask=0x0000 button=0x0000
        DEBUG1: send key down to "SiBook-Pro.local" id=57523, mask=0x0000, button=0x0000
        DEBUG1: deactivate actions
        DEBUG1: hotkey: keyUp(AudioPlay,SiBook-Pro.local)
        DEBUG1: onKeyUp id=57523 mask=0x0000 button=0x0000
        DEBUG1: send key up to "SiBook-Pro.local" id=57523, mask=0x0000, button=0x0000
    Client log:
        DEBUG1: recv key down id=0x0000e0b3, mask=0x0000, button=0x0000
        DEBUG1: mapKey e0b3 (57523) with mask 0000, start state: 0000
        DEBUG1: key e0b3 is not on keyboard
        DEBUG1: recv key up id=0x0000e0b3, mask=0x0000, button=0x0000
        DEBUG1: recv enter, 1279,386 5 2000
    As you can see, the client claims the received key is not on the keyboard. I don't understand, since it is the same key as on the MacBook's own keyboard. I tried reversing the client-server config to see if I could capture the key being sent when I pressed Play on the MacBook, but the key doesn't seem to even make it to Synergy. Almost all key presses get logged, but the media keys seem to bypass the logs and just execute their function locally: I press Play on the MacBook (with the MacBook as the server), the key plays music on the MacBook, and nothing is logged to the debug log.


  • Office Compatibility Pack and File Permissions

    - by hymie
    MS isn't my thing, so I hope somebody can give me a pointer. We have a Windows domain, with a Server 2003 SP1 Enterprise file server. One of the specific files is an MS Excel 2007 (.xlsx) file created by user LK. In the "Security" preferences setting, about a half-dozen users (including me) have access to this file. LK is the owner and has "Full Control", while the rest of us have "Read", "Read & Execute", and "Write" permission. LK is also the owner of the directory that this file resides in; I don't know if that's relevant. So far so good. My desktop machine has Windows XP SP3 and Excel 2003 SP3, plus the Office Compatibility Pack, which lets me read and write the new .xlsx files. However, whenever I write the file, the permissions are changed: the newly written file has permissions only for LK and me, and both are "Full Control". So in short, what am I doing wrong, and how should I set this up to do it right, keeping the permissions on the file that were there when I started?


  • SWATCH - what am I doing wrong?

    - by Brian Dunbar
    What I want/need/desire is to log when a user logs into my FTP server. Problem: I can't make swatch work the way I should be able to. This data is logged to a file, but of course these logs are not kept very long. I can't keep the logs around forever, but I can extract data from them, analyze it, and store results elsewhere. If there is a better way to do this than the following, I'm all ears.
    Swatch version 3.2.3, Perl 5.12, FTP: vsftpd, OS (test): OS X 10.6.8, OS (production): Solaris.
    From man swatch I see I can pass contents to a command, so I should be able to echo those values to a file and do a sed/cut/uniq thing on them for stats:
        exec command
            Execute command. The command may contain variables which are substituted
            with fields from the matched line. A $N will be replaced by the Nth field
            in the line. A $0 or $* will be replaced by the entire line.
    My .swatchrc:
        watchfor /OK LOGIN/
            echo=red
            pipe "echo "0: $0 1:$1 2:$2 3:$3 4:$4 5:$5" >> /Users/bdunbar/dev/ftplog/output.txt"
    Launched with:
        $ swatch -c /Users/bdunbar/.swatchrc --script-dir /Users/bdunbar/dev/ftplog -t /Users/bdunbar/dev/ftplog/vsftpd.log &
    Test:
        $ echo "Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client \"206.209.255.227\"" >> vsftpd.log
    Results: it's echoing to the TTY. This is not needed or desired on the server, but it does tell me things are working:
        *** swatch version 3.2.3 (pid:25780) started at Mon Jul 9 15:23:33 CDT 2012
        Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client 206.209.255.227
    Results: bad! I appear to not be sending the variables to the text file:
        $ tail -f output.txt
        0: /Users/bdunbar/dev/ftplog/.swatch_script.25780 1: 2: 3: 4: 5:
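
    A hedged reading of the output above (a sketch, not tested on this setup): the $N substitution is documented for exec, while pipe hands the matched line to the command's stdin, so inside pipe the $0 that gets expanded is the generated swatch script's own name. Something along these lines keeps the substitution where it is documented to work:
        watchfor /OK LOGIN/
            echo=red
            # $* is replaced by the entire matched line, per the man excerpt above.
            exec "echo $* >> /Users/bdunbar/dev/ftplog/output.txt"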


  • What is wrong with those crontabs?

    - by Guillaume Boudreau
    I want my projectors to power on before the mall opens and power off when the mall closes, so I created crontab entries (placed in /etc/cron.d/mall). But today (Thu Nov 22 18:58:29 EST 2012 is the current date on that server), the power-off.sh script got executed at 17:20 (see the syslog excerpt below). Being Thu, Nov. 22, I would have expected power-off.sh to be executed at 21:20, per the 4th crontab line below. Why did power-off.sh execute at 17:20? What is wrong with my crontab entries?
    Content of /etc/cron.d/mall:
        40  9 22-30 Nov Mon-Sat myuser /usr/local/projectors/power-on.sh
        40 10 22-30 Nov Sun     myuser /usr/local/projectors/power-on.sh
        20 18 22-30 Nov Mon-Wed myuser /usr/local/projectors/power-off.sh
        20 21 22-30 Nov Thu-Fri myuser /usr/local/projectors/power-off.sh
        20 17 22-30 Nov Sat-Sun myuser /usr/local/projectors/power-off.sh
        40  9 1-22  Dec Mon-Sat myuser /usr/local/projectors/power-on.sh
        40 10 1-22  Dec Sun     myuser /usr/local/projectors/power-on.sh
        20 21 1-22  Dec Mon-Fri myuser /usr/local/projectors/power-off.sh
        20 17 1-22  Dec Sat-Sun myuser /usr/local/projectors/power-off.sh
    Syslog excerpt:
        $ grep power-off.sh /var/log/syslog
        Nov 22 17:20:01 lanner-ubu-c2d CRON[23007]: (myuser) CMD (/usr/local/projectors/power-off.sh)
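
    A hedged observation from crontab(5) semantics that fits this symptom: when both the day-of-month and the day-of-week fields are restricted (neither is *), the job runs when either field matches. Nov 22 matches the 22-30 day-of-month in the Sat-Sun line, so its 17:20 job fires even on a Thursday. A common workaround is to leave day-of-week at * and test the weekday in the command, as in this sketch:
        # Run at 17:20 on Nov 22-30, but only actually power off on Sat/Sun.
        # date +%u prints 1-7 (Mon=1); % must be escaped inside a crontab.
        20 17 22-30 Nov * myuser [ "$(date +\%u)" -ge 6 ] && /usr/local/projectors/power-off.sh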


  • Scripted forwarding for Outlook 2003

    - by John Gardeniers
    We have a staff member in sales who has gone onto a 4-day week (getting ready for retirement), so each Thursday afternoon her email needs to be forwarded to another user, and each Friday afternoon it needs to be set back. I'm using the VBScript below to do this, run via the Task Scheduler. Although the script appears to do its job, based on what I see when I view the user's Exchange settings, Exchange doesn't always recognise that the setting has changed. E.g. last Thursday the forwarding was enabled and worked correctly. On Friday the script did its thing to clear the forwarding, but Exchange continued to forward messages all weekend. I found that I can force Exchange to honour the changed setting by merely opening and closing the user's properties in ADUC. Of course I don't want to have to do that. Is there a non-manual way I can have Exchange read and honour the setting?
    The script (VBS):
        ' Call this script with the following parameters:
        '
        ' SrcUser - The logon ID of the user whose account is to be modified
        ' DstUser - The logon account of the person to whom mail is to be forwarded
        '           Use "reset" to clear the email forwarding
        SrcUser = WScript.Arguments.Item(0)
        DstUser = WScript.Arguments.Item(1)

        SourceUser = SearchDistinguishedName(SrcUser) ' The user login name
        Set objUser = GetObject("LDAP://" & SourceUser)
        If DstUser = "reset" Then
            objUser.PutEx 1, "altRecipient", ""
        Else
            ForwardTo = SearchDistinguishedName(DstUser) ' The contact common name
            objUser.Put "AltRecipient", ForwardTo
        End If
        objUser.SetInfo

        Public Function SearchDistinguishedName(ByVal vSAN)
            Dim oRootDSE, oConnection, oCommand, oRecordSet
            Set oRootDSE = GetObject("LDAP://rootDSE")
            Set oConnection = CreateObject("ADODB.Connection")
            oConnection.Open "Provider=ADsDSOObject;"
            Set oCommand = CreateObject("ADODB.Command")
            oCommand.ActiveConnection = oConnection
            oCommand.CommandText = "<LDAP://" & oRootDSE.get("defaultNamingContext") & ">;(&(objectCategory=User)(samAccountName=" & vSAN & "));distinguishedName;subtree"
            Set oRecordSet = oCommand.Execute
            On Error Resume Next
            SearchDistinguishedName = oRecordSet.Fields("DistinguishedName")
            On Error GoTo 0
            oConnection.Close
            Set oRecordSet = Nothing
            Set oCommand = Nothing
            Set oConnection = Nothing
            Set oRootDSE = Nothing
        End Function


  • Configuring a MySQL 5.1 Instance on Windows 7 Professional x64 Fails

    - by Thomas Owens
    I'm trying to set up my laptops to function as mobile development environments. Installing the software on my Linux machine and getting it configured was fairly straightforward; however, I'm having trouble getting MySQL 5.1 Server installed and configured on Windows 7 Professional 64-bit. I'm currently using the Windows MSI installer for the complete MySQL 5.1 system (as opposed to the Essentials installer also available). I've tried to install using both the 32-bit and 64-bit versions of MySQL 5.1; the same events occur in both. I've installed both the Server Instance Configuration Wizard and Workbench, and everything appears to be installed just fine.
    When I open the Instance Configuration Wizard, I select Detailed Configuration. On the next screen, I select Development Environment, then Multifunctional Database on the screen after that. I leave the InnoDB settings unchanged. I select Manual Setting with 5 concurrent connections. I enable TCP/IP Networking on port 3306 and Enable Strict Mode. I select the Standard Character Set. I check the boxes for Install as a Windows Service (and provide the name "MySQL") and Include the Bin Directory in Windows PATH. On the next screen, I set my root user name and password. I do not enable root access from remote machines, and I also do not create an anonymous account.
    On the final screen of the wizard, when I click "Execute", the first two tasks (Prepare Configuration and Write Configuration File) complete. However, when it reaches Start Service, the wizard hangs and becomes unresponsive ("Not Responding" appears in the title bar and in Task Manager). I would really like to be able to use both my Windows and Linux laptops as full-blown mobile development environments, but I can't do that without being able to run MySQL. Has anyone encountered this problem before? What options do I have to correct it?
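
    Two hedged checks before a re-install (the service name "MySQL" is the one chosen in the wizard; the commands are standard Windows tooling): the wizard's Start Service step commonly stalls when a previous attempt left a half-registered service behind, or when something else already owns port 3306:
        rem Is anything already listening on the MySQL port?
        netstat -ano | findstr :3306
        rem Does a stale service registration exist from an earlier attempt?
        sc query MySQL
        rem If so, removing it before re-running the wizard is a common step.
        sc delete MySQL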


  • Slow tracepath on local LAN

    - by Simone Falcini
    I am on ESXi and I have 2 instances: Ubuntu and CentOS. These are the network configurations.
    Ubuntu:
        eth0      Link encap:Ethernet  HWaddr 00:50:56:00:1f:68
                  inet addr:212.83.153.71  Bcast:212.83.153.71  Mask:255.255.255.255
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:76059 errors:0 dropped:26 overruns:0 frame:0
                  TX packets:7224 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:6482760 (6.4 MB)  TX bytes:2080684 (2.0 MB)

        eth1      Link encap:Ethernet  HWaddr 00:0c:29:46:5a:f2
                  inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:252 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:608 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:42460 (42.4 KB)  TX bytes:82474 (82.4 KB)
    /etc/iptables.conf:
        *nat
        :PREROUTING ACCEPT [142:12571]
        :INPUT ACCEPT [5:1076]
        :OUTPUT ACCEPT [8:496]
        :POSTROUTING ACCEPT [8:496]
        -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
        COMMIT
        *filter
        :INPUT ACCEPT [2:72]
        :FORWARD ACCEPT [4:336]
        :OUTPUT ACCEPT [6:328]
        -A INPUT -i eth1 -p tcp -j ACCEPT
        -A INPUT -i eth1 -p udp -j ACCEPT
        -A INPUT -i eth0 -p tcp --dport ssh -j ACCEPT
        COMMIT
    CentOS:
        eth0      Link encap:Ethernet  HWaddr 00:0C:29:74:1C:55
                  inet addr:192.168.1.2  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::20c:29ff:fe74:1c55/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:499 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:475 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:68326 (66.7 KiB)  TX bytes:82641 (80.7 KiB)
    The main problem is that if I execute "ssh 192.168.1.2" from the CentOS instance, it takes more than 20s to connect. It seems like it's routing the connection to the wrong network. What could it be? Thanks!
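
    One hedged thing to rule out, since a ~20s hang before the password prompt is classically a name-lookup timeout rather than routing: sshd on the target doing reverse DNS (and GSSAPI) lookups against an unreachable resolver. A quick test and the common workaround, as a sketch:
        # Watch where the delay happens during connection setup.
        ssh -vvv 192.168.1.2
        # On the target, disabling the lookups is the usual fix:
        #   /etc/ssh/sshd_config:  UseDNS no
        #                          GSSAPIAuthentication no
        # then: service sshd restart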


  • Effects of internet connection speeds on server queries

    - by SephMerah
    Can my internet connection significantly affect queries run in phpMyAdmin? I am currently on 18 down and 30 up. I switched internet connections today and noticed a deep drop in query performance. The query that I am running is SELECT * FROM table. Simple. The table has one row of data. The MySQL server is on the same server as everything else; it is a VPS hosted by GoDaddy. I don't have any other information: CentOS 6.3, MySQL 5.1, phpMyAdmin 3.4. Okay, I used Google's developer tools to inspect the XHR going out and coming in, and this is what it reported:
        {"success":true,"message":"<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec )<\/div>","sql_query":"<div id=\"result_query\" align=\"\">\n<div class=\"success\">Your SQL query has been executed successfully ( Query took 0.0033 sec ) SNIP..................."}
    So apparently my server is fine. The strange thing, though, is that the returned XHR comes back as soon as I execute the query on the page, within less than a second, yet I don't know why phpMyAdmin does not display the result immediately. I am going to try a re-install.


  • Nginx deny doesn't work for folder files

    - by user195191
    I'm trying to restrict access to my site to allow only specific IPs, and I've got the following problem: when I access www.example.com, deny works perfectly, but when I try to access www.example.com/index.php it returns the "Access denied" page AND the PHP file is downloaded directly in the browser without processing. I do want to deny access to all the files on the website for all IPs but mine. How should I do that? Here's the config I have:
        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            location / {
                index index.html index.php;    ## Allow a static html file to be shown first
                try_files $uri $uri/ @handler; ## If missing pass the URI to front handler
                expires 30d;                   ## Assume all files are cachable
                allow my.public.ip;
                deny all;
            }

            location @handler {                ## Common front handler
                rewrite / /index.php;
            }

            location ~ \.php/ {                ## Forward paths like /js/index.php/x.js to relevant handler
                rewrite ^(.*\.php)/ $1 last;
            }

            location ~ \.php$ {                ## Execute PHP scripts
                if (!-e $request_filename) {
                    rewrite / /index.php last; ## Catch 404s that try_files miss
                }
                expires off;                   ## Do not cache dynamic content
                fastcgi_pass 127.0.0.1:9001;
                fastcgi_param HTTPS $fastcgi_https;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;        ## See /etc/nginx/fastcgi_params
            }
        }
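
    A hedged sketch of the usual fix: the allow/deny pair only lives in location /, so requests that match the PHP locations never hit it. The access-module directives are valid at server level, where they then apply to every location below, PHP blocks included:
        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            # Moved up from "location /": applies to every location below,
            # including the ~ \.php$ block that had no allow/deny of its own.
            allow my.public.ip;
            deny all;

            # ... the existing location blocks stay as they are ...
        }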


  • Apple: Bind a key to a commandline command?

    - by Stefan Lasiewski
    I have a Mac PowerBook running Leopard (10.5.8). Does Leopard provide an easy way to bind keys to commands which are typically run on the command line? For example, I can open up Terminal.app and run the command
        /System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine
    which will activate the screensaver and lock my screen. What if I want to bind 'Apple-key L' to this command and execute it globally, regardless of which application is in use at the moment? Can I do this, or can I only run ScreenSaverEngine from a Terminal window? I tried to set up global keyboard shortcuts, but it seems that this won't allow me to bind a key to an arbitrary shell command:
        Note: You can create keyboard shortcuts only for existing menu commands. You cannot define keyboard shortcuts for general purpose tasks such as opening an application or switching between applications.
    I tried to set up an application keyboard shortcut, but commands like ScreenSaverEngine don't seem to count as an application. Note that this screensaver/lock-screen command is just one example; I have come across other nifty commands which I might want to bind to a key combination as well. I can do this in GNOME and Windows (with varying success). How about with Leopard?


  • Storing secure keys on Ubuntu web server

    - by Sencha
    I'm running Ubuntu 12.04 Precise with a DUNG (Django, Unix, Nginx & Gunicorn) environment, and my app (as well as various config files) is stored in a Python virtual environment inside /srv, which the www-data user has access to. The nginx and gunicorn processes all run as www-data. My web app requires secure credentials, which I am storing in an environment.sh file. This file contains various exports and is run using source before the gunicorn processes execute. My concern is the location of the environment.sh file and its permissions. Is it okay to store this file inside the /srv folder, where www-data has access to it? Or should it be stored, and owned by root, somewhere else, such as /var/myapp/environment.sh? Also, regarding the www-data user: if any of my web processes (which run as www-data) are compromised and someone gains access to them, does that mean the attacker could potentially read any file on the system, even if they can't write? Including my secure keys?
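
    As a hedged sketch of the usual tightening (the /srv/myapp path is an assumption standing in for wherever the file lives): keep the file owned by root, readable only by the group the workers run as, and with no world access. On the second question, a compromised www-data process can read exactly those files whose mode bits or ACLs permit it, wherever they live, so the mode matters more than the directory:
        # Owned by root, group-readable by the service account, no world access.
        sudo chown root:www-data /srv/myapp/environment.sh
        sudo chmod 640 /srv/myapp/environment.sh
        # Quick audit of what www-data can actually see:
        sudo -u www-data cat /srv/myapp/environment.sh   # should succeed
        sudo -u www-data cat /etc/shadow                 # should fail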


  • Install Exchange 2013 with DSC

    - by Alain Laventure
    I tried to install Exchange 2013 with the WindowsProcess DSC resource in an existing Exchange configuration. All prerequisites are installed (the Exchange organization already exists). This is my resource section:
        WindowsProcess Exchange2013
        {
            Credential = $credential
            Path       = "C:\Sources\Cumulative Update 5 for Exchange Server 2013 (KB2936880)\Setup.exe"
            Arguments  = "/mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms /TargetDir:C:\EX2013"
            Ensure     = "Present"
        } # End filter
        } # End node
        } # End configuration
    The generated MOF contains:
        /*
        @TargetNode='TargetDSC02'
        @GeneratedBy=exadmin
        @GenerationDate=08/02/2014 08:16:03
        @GenerationHost=SOURCEDSC02
        */
        instance of MSFT_Credential as $MSFT_Credential1ref
        {
            Password = "Password1";
            UserName = "S05\\Exadmin";
        };
    Exadmin is a member of the Organization Management group, and also of the Domain Admins group, in order to be able to install Exchange. When I execute this resource, the Exchange installation starts, but after 1 minute it stops with this error:
        Failed [Rule:GlobalServerInstall] [Message:You must be a member of the 'Organization Management'
        role group or a member of the 'Enterprise Admins' group to continue.]
    To be sure that rights really are the problem, I created a special user with only administrator rights on the Exchange server and no Exchange permissions, and ran the following manually on the new Exchange server:
        .\Setup.exe /mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms /Targetdir:C:\EX2013
    I got the same error as with DSC. After I added my test user to the Organization Management group and ran the same command again, the Exchange 2013 installation finished without any error. That proves that the problem with DSC is permissions.


  • Load balancing with puppet

    - by Gonçalo Queirós
    Hi there. I'm trying to set up a load-balancing system. My load balancer (nginx) has a conf file where I should list all IPs of the upstream servers. I could put the IPs in the conf manually, but this way I would need to change the conf file every time I add or remove an upstream server. For now I've come up with two different ideas, but I don't much like either:
    1. Have every upstream machine use exported resources to create a file with its IP. Then the load balancer server will have an "include conf_directory/*" and load all the files created by the upstream servers. Since the load balancer is using nginx this can be done, but if I later want to configure something that doesn't support "include" in its conf files, this solution will not work.
    2. If the config doesn't support the "include" command, then we could again have every upstream server use exported resources to create a file with its IP, and later the load balancer executes a command that picks up every file and generates the config.
    Both versions adopt the same technique; the difference is that version 2 is used when the server (that needs to have a conf generated) doesn't recognize a command like "include" inside its own conf. Now, my question is: is there any way to do this in a different form? I suspect there is, since Puppet is made to manage multiple servers; it seems a bit strange not to have an easy way to configure load balancers.
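
    One hedged sketch of the usual pattern (this leans on the puppetlabs-concat module, and the resource titles, port, and file path are made up, so treat it as a shape rather than a drop-in): export a fragment from every upstream node and have the balancer collect them straight into one nginx conf, no include directive needed:
        # On each upstream node: export one fragment carrying this node's IP.
        @@concat::fragment { "upstream-${::hostname}":
          target  => '/etc/nginx/conf.d/backend-upstream.conf',
          content => "    server ${::ipaddress}:8080;\n",
          order   => '10',
        }

        # On the load balancer: assemble the file and collect all fragments.
        concat { '/etc/nginx/conf.d/backend-upstream.conf': }
        concat::fragment { 'upstream-header':
          target  => '/etc/nginx/conf.d/backend-upstream.conf',
          content => "upstream backend {\n",
          order   => '01',
        }
        concat::fragment { 'upstream-footer':
          target  => '/etc/nginx/conf.d/backend-upstream.conf',
          content => "}\n",
          order   => '99',
        }
        Concat::Fragment <<| target == '/etc/nginx/conf.d/backend-upstream.conf' |>>
    The same collection trick covers idea 2: the collected fragments are just files, so a generator command can consume them for software that has no include mechanism.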


  • Windows 7 scheduled task returns 0x2

    - by demmith
    I have identical scheduled tasks running in Windows XP Pro and Windows 7. The XP Pro one runs fine; the Windows 7 one always returns 0x2 (which means "The system cannot find the file specified"; however, executing from the command line is no problem) in the Last Run Result column of the Task Scheduler UI. The scheduled task executes a .bat file daily. The .bat file contains a call to execute a Perl script. As I stated in the previous paragraph, it executes under XP without any trouble, but under Windows 7, no dice. The task under Windows 7 is set to "Run whether the user is logged on or not". In this case that user is me; I am the only user of the system. It is also set to "Run with highest privileges", and it is not hidden. The .bat file executes perfectly well from the command line: it calls the Perl script as expected and the Perl script does its thing. I have searched far and wide looking for an appropriate answer to this issue. So far I have found nothing. What the devil is going on with this Win7 scheduled task? I am ready to pull my hair out.
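
    Since 0x2 is "file not found", a hedged sketch of the usual culprits (all paths below are placeholders): a non-interactive Windows 7 task inherits neither your working directory nor necessarily your PATH, so either fill in the task's "Start in" field or make the .bat self-locating and fully qualified:
        @echo off
        rem Don't rely on the task's working directory or on PATH:
        rem %~dp0 expands to the directory this .bat lives in.
        cd /d "%~dp0"
        rem Call the interpreter by full path (placeholder install location).
        "C:\Perl\bin\perl.exe" "%~dp0myscript.pl"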


  • CentOS: Init scripts failing to start for some unknown reason

    - by user705142
    I'm running CentOS 6.2. I've just migrated some applications over to a failover server and copied their init scripts into /etc/init.d. I've made them executable, added them to chkconfig with chkconfig --add, set their levels, made sure they're residing in /etc/rc.d/, and made sure I can execute them from rc2.d etc. The permissions are the same on both servers, and they run in the same order as on the primary server. Yet on reboot they don't start. Any ideas? The offenders are these:
        jetty   0:off 1:off 2:on 3:on 4:on 5:on 6:off
        smart   0:off 1:off 2:on 3:on 4:on 5:on 6:off
    /etc/init.d:
        -rwxr-xr-x. 1 root root 14456 Mar 13 20:21 jetty
        -rwxrwxrwx. 1 root root  5829 Mar 29 09:58 smart
    /etc/rc.d/rc3.d:
        lrwxrwxrwx. 1 root root 15 Mar 29 19:21 S99jetty -> ../init.d/jetty
        lrwxrwxrwx. 1 root root 11 Mar 26 17:12 S99local -> ../rc.local
        lrwxrwxrwx. 1 root root 15 Mar 29 19:21 S99smart -> ../init.d/smart
    I've checked, and I'm in runlevel 3. I've checked their logs, and there's no indication that they've been started. I can start them manually easily, and other services are starting normally. I'm completely out of ideas, really.
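
    One hedged avenue that fits "copied from another server, runs by hand, silent at boot" on CentOS 6: the copied scripts may carry the wrong SELinux context (files copied into /etc/init.d don't automatically get the init-script label). A quick check and fix, as a sketch:
        # Compare the contexts against a known-good script like crond.
        ls -Z /etc/rc.d/init.d/jetty /etc/rc.d/init.d/smart /etc/rc.d/init.d/crond
        # Relabel to the policy default, then watch for denials at next boot.
        restorecon -v /etc/rc.d/init.d/jetty /etc/rc.d/init.d/smart
        grep denied /var/log/audit/audit.log | tail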

