Search Results

Search found 24010 results on 961 pages for 'source rar'.


  • Script to check a shared Exchange calendar and then email detail

    - by SJN
    We're running Server and Exchange 2003 here. There's a shared calendar which HR keeps up to date detailing staff who are on leave. I'm looking for a VB Script (or alternative) which will extract the "appointment" titles of each item for the current day and then email the detail to a mail group, in doing so notifying the group of which staff are on leave for the day. The resulting email body should be:

    Staff on leave today:
    Mike Davis
    James Stead

    @Paul Robichaux - ADO is the way I went for this in the end; here are the key components for those interested:

      Dim Rs, Conn, Url, Username, Password, Recipient
      Set Rs = CreateObject("ADODB.Recordset")
      Set Conn = CreateObject("ADODB.Connection")

      'Configurable variables
      Username = "Domain\username"   ' AD domain\username
      Password = "password"          ' AD password
      Url = "file://./backofficestorage/domain.com/MBX/username/Calendar"  ' path to user's mailbox and folder
      Recipient = "[email protected]"

      Conn.Provider = "ExOLEDB.DataSource"
      Conn.Open Url, Username, Password
      Set Rs.ActiveConnection = Conn
      Rs.Source = "SELECT ""DAV:href"", " & _
                  " ""urn:schemas:httpmail:subject"", " & _
                  " ""urn:schemas:calendar:dtstart"", " & _
                  " ""urn:schemas:calendar:dtend"" " & _
                  "FROM scope('shallow traversal of """"') "
      Rs.Open
      Rs.MoveFirst
      strOutput = ""
      Do Until Rs.EOF
          If DateDiff("s", Rs.Fields("urn:schemas:calendar:dtstart"), date) >= 0 And DateDiff("s", Rs.Fields("urn:schemas:calendar:dtend"), date) < 0 Then
              strOutput = strOutput & "<p><font size='2' color='black' face='verdana'><b>" & Rs.Fields("urn:schemas:httpmail:subject") & "</b><br />" & vbCrLf
              strOutput = strOutput & "<b>From: </b>" & Rs.Fields("urn:schemas:calendar:dtstart") & vbCrLf
              strOutput = strOutput & "&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<b>To: </b>" & Rs.Fields("urn:schemas:calendar:dtend") & "<br /><br />" & vbCrLf
          End If
          Rs.MoveNext
      Loop
      Conn.Close
      Set Conn = Nothing
      Set Rs = Nothing

    After that, you can do what you like with strOutput; I happened to use CDO to send an email:

      Set objMessage = CreateObject("CDO.Message")
      objMessage.Subject = "Subject"
      objMessage.From = "[email protected]"
      objMessage.To = Recipient
      objMessage.HTMLBody = strOutput
      objMessage.Send

    Read the article

  • How do I (robustly) remotely execute tasks on Windows workstations in a domain?

    - by Zac B
    I'm not even sure if "robustly" is a word. Anyway.

    Context: We have a few hundred Windows 7 workstations on a LAN. We use AD/GPO management pretty heavily, but there are a lot of periodic and/or manual maintenance tasks we need to do that can't be done via GPO/scheduled task. For example, say I want to execute program X (which runs silently, in the background, and doesn't bother the user) on workstation Y, or say I want to execute task A on a workstation group B, either on a schedule or on demand. Kicking the users off of their computers to do this (i.e. using RDP) is a no-no, and doesn't work on groups anyway.

    Question: What's the best way to do this that is robust enough that, after setup, I could give it to beginner support people (read: people who are phobic of the command line, and get confused with GUI interfaces more complicated than Firefox)? I'm a competent programmer, and, if there is a robust set of tools or a framework out there for this type of task, I'd consider hacking something together myself if it didn't take too long. If there's some combination of tools or techniques that others use to make remote workstation administration doable by beginners, I have yet to find it.

    For those who care about the "why": I'm midlevel IT, and was told to implement a remote management solution that allows arbitrary/scheduled remote execution, with confirmation that programs actually ran remotely, and the ability to view what they returned. "Why?" I asked, "Can't I just use PsExec and the task scheduler on a dispatcher machine?" "No," I was told, "'Joe' the second-week tech is going to be in charge of this one, and he needs something simple with a GUI."

    What I've tried: I've played with making a bunch of one-clickable "transfer files to remote computer and run them with PsExec" batch/VB scripts, but those tend to break down and don't easily support running on customizable groups. I've played a little bit with the Windows version of Puppet, but it doesn't support arbitrary-time remote execution (its ability to group computers into a tree/node structure is really nice, though). I've used an older version of Altiris and, while it does a lot of what I want, its interface is awful, it's slow, it crashes a lot, and it's probably too expensive for management. SwiftWater's DMS solution does some of what I want, but it's very underdeveloped, closed source (not a deal breaker but not ideal), and I get the impression that support and reliability are lacking.
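
    As a rough illustration of the "one-clickable PsExec wrapper" idea mentioned above, a minimal sketch in Python might look like the following. The machine-list file, the example command, and the assumption that psexec.exe is on the PATH are all hypothetical, not details from the question:

      # Sketch: run one command on a group of machines with PsExec and report exit codes.
      # Assumes psexec.exe (Sysinternals) is on the PATH and machines.txt lists one hostname per line.
      import subprocess
      from pathlib import Path

      def run_on_group(machine_file, command):
          for host in Path(machine_file).read_text().split():
              # -accepteula suppresses the EULA prompt; -s runs the program as SYSTEM,
              # silently, without touching the logged-on user's session
              result = subprocess.run(
                  ["psexec", "\\\\" + host, "-accepteula", "-s"] + command,
                  capture_output=True, text=True)
              # PsExec propagates the remote program's exit code, which gives the
              # "confirmation that programs actually ran remotely" asked for above
              print(host, "exit code", result.returncode)

      run_on_group("machines.txt", ["cmd", "/c", "ver"])

    Wrapping something like this behind a small GUI is usually the piece that makes it usable by first-line techs.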

    Read the article

  • Biztalk 2009 install help

    - by _pemex99_
    Hi, I'm trying to install BizTalk 2009, with SQL 2008 R2 (November CTP), on Windows Server 2008. I'm blocked at the "Groups" configuration step:

      [19:22:18 Info Configuration Framework] Configuring feature: WMI
      [19:22:18 Info BtsCfg] Entering function: CBtsCfg::ConfigureFeature
      [19:22:18 Info BtsCfg] Configuring feature: WMI
      [19:22:18 Info BtsCfg] Entering function: CBtsCfg::IsSelectedAnswer
      [19:22:18 Info BtsCfg] Leaving function: CBtsCfg::IsSelectedAnswer
      [19:22:18 Info BtsCfg] Entering function: CWMI::Connect
      [19:22:18 Info BtsCfg] WMI is already connected
      [19:22:18 Info BtsCfg] Leaving function: CWMI::Connect
      [19:22:18 Info ConfigHelper] NT group BizTalk Server Operators was not created because it already exists
      [19:22:18 Info ConfigHelper NetAPI Info: ] Le groupe local spécifié existe déjà.
      [19:22:18 Info ConfigHelper] NT group BizTalk Server Administrators was not created because it already exists
      [19:22:18 Info ConfigHelper NetAPI Info: ] Le groupe local spécifié existe déjà.
      [19:22:18 Info BtsCfg] Entering function: CWMI::CreateGroup
      2010-01-14 19:22:18:0527 [INFO] WMI CWMIInstProv::PutInstance() try to acquire lock
      2010-01-14 19:22:18:0539 [INFO] WMI CWMIInstProv::PutInstance() lock acquired successfully
      2010-01-14 19:22:18:0546 [INFO] WMI CWMIInstProv::VerifyMgmtDbCompatibility(CInstance) started
      2010-01-14 19:22:18:0553 [INFO] WMI CWMIInstProv::VerifyMgmtDbCompatibility(CInstance) finished successfully
      2010-01-14 19:22:18:0564 [INFO] WMI CWMIInstProv::PutInstance(MSBTS_GroupSetting.MgmtDbName="BizTalkMgmtDb",MgmtDbServerName="ECTXEVLBZTK") started
      2010-01-14 19:22:18:0572 [INFO] WMI CAdapter::ConvertWMI2Admin() started
      2010-01-14 19:22:18:0581 [INFO] WMI CDataContainer::SetWCHAR() - Possible problem: item value is overwritten
      2010-01-14 19:22:18:0591 [INFO] WMI CAdapter::ConvertWMI2Admin() finished with HR=0
      2010-01-14 19:22:18:0611 [INFO] WMI QueryStringValue query regkey 'MgmtDBServer'
      2010-01-14 19:22:18:0620 [INFO] WMI CAdmCoreGroupInst::TryCreateNewGroup() started
      2010-01-14 19:22:18:0632 [INFO] WMI Creating Mgmt database...
      2010-01-14 19:22:18:0641 [INFO] WMI Calling CDataSource.Open() against ECTXEVLBZTK\master
      2010-01-14 19:22:18:0792 [INFO] WMI CDataSource.Open() returned
      2010-01-14 19:22:18:0810 [WARN] AdminLib GetBTSMessage: hrErr=80040e1d; Msg=Error "0x80040E1D" occurred.;
      2010-01-14 19:22:18:0824 [WARN] AdminLib GetBTSMessage: hrErr=c0c02524; Msg=Failed to create Management database "BizTalkMgmtDb" on server "ECTXEVLBZTK". Error "0x80040E1D" occurred.;
      2010-01-14 19:22:18:0835 [ERR] WMI Failed in pAdmInst->Create() in CWMIInstProv::PutInstance(). HR=c0c02524
      2010-01-14 19:22:18:0846 [ERR] WMI WMI error description is generated: Failed to create Management database "BizTalkMgmtDb" on server "ECTXEVLBZTK". Error "0x80040E1D" occurred.
      2010-01-14 19:22:18:0860 [INFO] WMI CWMIInstProv::PutInstance() finished. HR=c0c02524
      [19:22:18 Error BtsCfg] f:\bt\890\private\source\setup\prod\btssetup\btscfg\btswmi.cpp(358): FAILED hr = c0c02524
      [19:22:18 Error BtsCfg] Failed to create Management database "BizTalkMgmtDb" on server "ECTXEVLBZTK". Error "0x80040E1D" occurred.

    It seems that the install can't create the Management database, but the SSO database is created OK... Does anyone have a clue?

    Read the article

  • Win-XP Browsers Hang on page load - (waiting for...)

    - by CHarmon
    Hello, I’m having problems with my browsers hanging while loading pages on my desktop machine. I’m using Windows XP Pro with SP3, fully updated except for IE 8. All three of my browsers - IE 7, Chrome and Firefox - are having the same problems. Pages are not being loaded and hang on “waiting for …”; the browsers are waiting for the page being loaded or for ad servers. Sometimes a page will load but the loading graphic continues to be displayed as if the page were still loading, even when it appears to be fully loaded. The problem is bad enough that I can’t really use any of my browsers, though I can eventually get most pages to load by stopping and restarting the page load.

    I have a DSL modem with a wireless router, and I have been able to eliminate the modem and router as the source of my problem. My laptop doesn’t have any problems, even when hardwired to the router with the wireless connection disabled. I deleted the NIC and let XP re-install it, tried a different network cable, and tried the same router port used in the laptop test. One clue that may be important is that I can’t connect to my router using the desktop machine…the page hangs while trying to connect. I can ping the router, and I can quickly connect to the router using the laptop. I also can’t use the Windows Update process – the page never fully loads. The problem affects other user accounts and even happens in safe mode. I am convinced the problem is with part of the O/S…some layer able to affect all of the browsers. The purpose of this post is to see if anyone has some ideas before I do an XP repair.

    I have done quite a bit of troubleshooting:
    - Ran a full anti-virus scan with AVG – no problems
    - Ran full scans with Spybot, MalwareBytes and Sophos anti-rootkit – no problems
    - Ran Chkdsk with both options checked
    - Ran Disk Cleanup
    - Defragged
    - Re-installed IE7
    - Cleared all the browser caches
    - Ran CCleaner (registry tool)
    - Ran HijackThis – nothing unusual (problem happens in safe mode too)
    - Ran Process Explorer – no unusual processes
    - Used System Restore and fell back several days – no change in the problem
    - Booted to last known good configuration – no change in the problem
    - Ran MicrosoftFixit50199.msi – no change in the problem

    Any ideas or suggestions would be appreciated…I’m not looking forward to doing a repair on XP. Thanks in advance for any help.

    Read the article

  • System failure - need diagnostic recommendation

    - by Ladislav Mrnka
    I have a big problem with my computer. The configuration is:
    - Intel i7 + 6x2GB OCZ DDR3
    - Motherboard: Asus P6T Deluxe V2, HDD controller configured to AHCI
    - Main drive: OCZ Vertex 2 (SSD) - contains all installed programs and the system
    - Second drive: Samsung SpinPoint - contains user profiles, ProgramData, virtual machines and databases
    - Third drive: Samsung SpinPoint - data drive + backups
    - OS: Windows 7 Ultimate x64

    I have never had any problem with this computer until now. During the weekend my computer completely crashed without any reason. Each time I tried to boot to Windows I got a BSOD with the message BAD_SYSTEM_CONFIG_INFO and an automatic restart (I didn't install any new SW or HW). After the restart the main OCZ drive was disconnected (not detected by the BIOS); when I turned the computer off and on, the drive was connected again. The same thing happened every time I tried to repair the installation somehow - it ended with some error, and after the restart the drive was disconnected. The only thing which worked was a format + fresh install.

    After installing almost everything, I wanted to install Visual Studio 2010 Ultimate (complete installation without SQL Server Express). During the installation of VS itself I always get a BSOD - it is too fast, so I'm not able to read the description. After the restart it searches all disk drives for a really long time, and sometimes it changes the boot drive so the system is not able to start - Bootmgr not found. After reconfiguring the BIOS the system starts. There is no event describing the failure in Event Viewer.

    Installing VS 2010 is absolutely necessary for me, so I need help with diagnostics. I need to find where the problem is - I expect it is in the OCZ drive or in the HDD controller on the motherboard, but I don't know how to confirm that. All components still have valid warranty. Can you recommend an approach or tools to find the problem?

    Edit: I'm still looking for the source of the problem. New information is that Windows is not able to perform a check disk (Chkdsk) on the SSD system drive. After restarting, it always starts checking the drive, and in the part where files are checked it fails with a BSOD - BAD_SYSTEM_CONFIG_INFO. After the next restart, skipping the check disk tests, the system runs.

    Read the article

  • Moving default web site to another drive

    - by Chadworthington
    I set the default location from c:\inetpub\wwwroot to d:\inetpub\wwwroot, but when I access my .NET 4.0 site I get this error:

      Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
      Parser Error Message: Unrecognized attribute 'targetFramework'. Note that attribute names are case-sensitive.
      Source Error:
      Line 105:  Set explicit="true" to force declaration of all variables.
      Line 106:  -->
      Line 107:  <compilation debug="true" strict="true" explicit="true" targetFramework="4.0">
      Line 108:  <assemblies>
      Line 109:  <add assembly="System.Web.Extensions.Design, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>

    When I try to manage the Basic Settings on the site and click the "Test Settings" button, I see that I have a problem under "authorization":

      The server is configured to use pass-through authentication with a built-in account to access the specified physical path. However, IIS Manager cannot verify whether the built-in account has access. Make sure that the application pool identity has Read access to the physical path. If this server is joined to a domain, and the application pool identity is NetworkService or LocalSystem, verify that <domain>\<computer_name>$ has Read access to the physical path. Then test these settings again.

    1) Do I need to grant rights to IIS on the new folder? Which user? I thought it was something like IIS_USER or similar, but I cannot determine the correct name of the user.
    2) Also, do I need to set the default version of the framework somewhere, at the Default Site level or at the virtual folder level? How is this done in IIS6? I am used to IIS5, or whatever came with XP Pro.
    3) My original site had a subfolder under wwwroot called "aspnet_client." How was this created? I manually copied it to the corresponding new location.

    My app was using separate ASP-specific databases for storing session state and role info, if that is relevant. Thanks
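
    As a starting point for the first question (giving the application pool identity read access to the relocated path), a minimal sketch is below. The D:\inetpub\wwwroot path and the IIS_IUSRS group are assumptions for illustration, not details confirmed by the question; the group to grant should match the actual app pool identity:

      # Sketch: grant an IIS worker group read access to the relocated web root via icacls.
      # Path and group name are assumptions; adjust to the real application pool identity.
      import subprocess

      WEB_ROOT = r"D:\inetpub\wwwroot"   # hypothetical new location
      PRINCIPAL = "IIS_IUSRS"            # built-in IIS group on IIS 7 and later (assumption here)

      # (OI)(CI)RX = inherit to files and folders, read & execute; /T applies it recursively
      subprocess.run(["icacls", WEB_ROOT, "/grant", PRINCIPAL + ":(OI)(CI)RX", "/T"], check=True)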

    Read the article

  • How to tune system settings for mongoDB on Linux?

    - by jsh
    Trying to squeeze a lot out of one question here -- please bear with me. Although the MongoDB man pages make several useful recommendations about system settings like ulimit (http://docs.mongodb.org/manual/reference/ulimit/), and other production factors (http://docs.mongodb.org/manual/administration/production-notes/), they seem mysteriously silent on things like virtual memory and swap settings. The closest we get to a hint is that "...the operating system’s virtual memory subsystem manages MongoDB’s memory..." (http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram).

    Running the same job - high writes and high reads on about 10,000,000 records in a single collection - on my 4-processor, 4GB RAM macbook and an 8-core ubuntu box with 64GB RAM, I saw dramatically WORSE read performance on the linux box with factory settings, and could hear the disk constantly spinning, indicating high I/O and presumably swapping. Yes, other things were happening on the box, but there was plenty of free RAM, disk space, etc.; furthermore, I did not see evidence that Mongo was expanding to take advantage of all that free RAM as it is touted to do.

    Linux box default settings were as follows:

      vm.swappiness = 60
      vm.dirty_background_ratio = 10
      vm.dirty_ratio = 20
      vm.dirty_expire_centisecs = 3000
      vm.dirty_writeback_centisecs = 500

    I hazarded some guesses looking at docs and blogs for other types of databases (Oracle, MYSQL, etc.), experimented, and adjusted as below:

      vm.swappiness = 10
      vm.dirty_background_ratio = 5
      vm.dirty_ratio = 5
      vm.dirty_writeback_centisecs = 250
      vm.dirty_expire_centisecs = 500

    I saw some immediate apparent improvements in read time. However, when I ran my test jobs again, read performance continued to be painfully sluggish during heavy writes. Then, I REBUILT the collection from an available data source - and suddenly I can read at 1ms or less per record WHILE doing the write job!

    So the question is really two-fold:
    1) What are appropriate VM settings for MongoDB on Linux?
    2) (bonus) Does Mongo do some checking or optimization with the OS while data is being built? In other words, if I have built a large data set with suboptimal VM or I/O settings, does Mongo make assumptions during the memory-mapping process that will fail to take advantage of optimizations down the road?

    Obviously I don't fully grok memory mapping under the hood (I was hoping I wouldn't have to). Any help appreciated...thanks! -j
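
    For reference, the adjusted values listed above can be applied until the next reboot by writing them under /proc/sys; a small sketch (requires root, and the values are simply the ones quoted in the question):

      # Sketch: apply the experimental VM settings from the question via /proc/sys.
      # Run as root; these do not persist across reboots (use /etc/sysctl.conf for that).
      settings = {
          "vm/swappiness": 10,
          "vm/dirty_background_ratio": 5,
          "vm/dirty_ratio": 5,
          "vm/dirty_writeback_centisecs": 250,
          "vm/dirty_expire_centisecs": 500,
      }

      for key, value in settings.items():
          with open("/proc/sys/" + key, "w") as f:
              f.write(str(value))
          print(key, "->", value)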

    Read the article

  • Puppet and launchd services?

    - by Joel Westberg
    We have a production environment configured with Puppet, and want to be able to set up a similar environment on our development machines: a mix of Red Hats, Ubuntus and OSX. As might be expected, OSX is the odd man out here, and sadly, I'm having a lot of trouble with getting this to work. My first attempt was using macports, with the following declaration:

      package { 'rabbitmq-server':
        ensure   => installed,
        provider => macports,
      }

    but this, sadly, generates the following error:

      Error: /Stage[main]/Rabbitmq/Package[rabbitmq-server]: Could not evaluate: Execution of '/opt/local/bin/port -q installed rabbitmq-server' returned 1: usage: cut -b list [-n] [file ...]
             cut -c list [file ...]
             cut -f list [-s] [-d delim] [file ...]
          while executing
      "exec dscl -q . -read /Users/$env(SUDO_USER) NFSHomeDirectory | cut -d ' ' -f 2"
          (procedure "mportinit" line 95)
          invoked from within
      "mportinit ui_options global_options global_variations"

    Next up, I figured I'd give homebrew a try. There is no package provider available by default, but puppet-homebrew seemed promising. Here, I got much farther, and actually managed to get the install to work.

      package { 'rabbitmq':
        ensure   => installed,
        provider => brew,
      }
      file { "plist":
        path   => "/Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist",
        source => "/usr/local/opt/rabbitmq/homebrew.mxcl.rabbitmq.plist",
        ensure => present,
        owner  => root,
        group  => wheel,
        mode   => 0644,
      }
      service { "homebrew.mxcl.rabbitmq":
        enable   => true,
        ensure   => running,
        provider => "launchd",
        require  => [ File["/Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist"] ],
      }

    Here, I don't get any error. But RabbitMQ doesn't start either (as it does if I do a manual load with launchctl):

      [... snip ...]
      Debug: Executing '/bin/launchctl list'
      Debug: Executing '/usr/bin/plutil -convert xml1 -o /dev/stdout /Library/LaunchDaemons/homebrew.mxcl.rabbitmq.plist'
      Debug: Executing '/usr/bin/plutil -convert xml1 -o /dev/stdout /var/db/launchd.db/com.apple.launchd/overrides.plist'
      Debug: /Schedule[weekly]: Skipping device resources because running on a host
      Debug: /Schedule[puppet]: Skipping device resources because running on a host
      Debug: Finishing transaction 2248294820
      Debug: Storing state
      Debug: Stored state in 0.01 seconds
      Finished catalog run in 25.90 seconds

    What am I doing wrong?

    Read the article

  • git, egit, submodules, and symlinks -- how should shared sub-projects be handled in eclipse?

    - by Autophil
    Here's the situation. I have a few git projects with a directory structure laid out more or less like this:

      simpleproj  app www admin demo  lib model orm view model user blah ...
      storeproj   app www about mobile fbapp  lib model orm view model user message cart product merchant

    Each directory in "lib" contains a separate project, either created in-house or forked, all of which use git for source control. So I figured I should make them submodules of my projects, right? Well, we've been moving toward eclipse + egit, because some of our Windows guys who aren't used to a CLI need something they can use without being scared of screwing things up. Anyway, the problem is, egit doesn't support submodules.

    So, my solution has been a rather crude one involving symlinks... let's say my directory structure on my dev box is generally laid out like this:

      ~/projects/
        bigproj
          .git
          app
          lib
            model (-> ~/lib/model/src/)
            orm (-> ~/lib/orm/src/)
        neatproj
          .git
          app
          lib
            view (-> ~/lib/view/src/)
        oldproj
          .git
          app
          lib
            orm (-> ~/lib/orm/src/)
      ~/lib/
        model
          .git
          src
          README.md
        orm
          .git
          src
          COPYING
        view
          .git
          src

    ...the symlinks point inside the directory with the git repo, so eclipse doesn't get confused, and everything sort of works. On my machine, I can update the libs from anywhere and all projects will be updated (needing to be committed again, of course). Each project stores a separate copy of the contents of the symlinked directories within "lib" -- but only when staged from within eclipse. After committing from eclipse and moving back to the CLI, git sees that a bunch of files have been removed and a few symlinks have been created. Of course this is acceptable also, probably more so than keeping a separate history of the libs for each project... but eclipse and CLI git obviously need to be on the same page so tons of files aren't vanishing and reappearing.

    So this brings me to my question. I'd like to know how to either:
    - get eclipse+egit to see the symlinks as symlinks, if git will somehow handle them properly*, or
    - get the CLI git to treat them as non-symlinks.
    Or, if there's a better way to do this, I'm all ears. Hope this all made sense! :D

    Note: tried to tag this as git-submodules, but was not allowed :(

    * should I make them relative or absolute? Either way it's a mess. Also, will symlinks work on windows? I know there's something similar but you need a 3rd party tool to manage them AFAIK; I doubt these would translate well.

    Read the article

  • Problems with MGCP proxy creation

    - by Popof
    Hi, I'm trying to bypass my ISP's router with my FreeBSD server (I have an optical connection, so there's an RJ45 used to connect the box to the WAN). Internet and TV are working fine (using igmpproxy to forward the TV stream), but I have a problem with the phone. The ISP's box is connected to the server, which gives it a LAN address. The problem is that when the box builds MGCP packets (and especially SDP ones) it uses its LAN address. So I've thought of writing a UDP proxy to handle MGCP and SDP packets, in order to replace the LAN address with the server's WAN address and then forward the packet to the WAN.

    Before starting to code, I captured the stream packets using my server as a bridge between the WAN connection and the ISP's box. And, in order to see if my solution is viable, I tried to send those packets to the box using nemesis. I tried to send a packet (found in the capture) containing an endpoint audit:

      AUEP 1447 aaln/[email protected] MGCP 1.0
      F: A

    In the wireshark capture the box replied:

      200 1447 OK
      A: a:PCMU;PCMA;G726-16;G726-24;G726-32;G726-40;G.723.1-5.3;G.723.1-6.3;G729;TELEPHONE-EVENT, fmtp:"TELEPHONE-EVENT 0-15,144,149,159", p:10-30, b:4-40, e:on, t:00, s:on, v:L;M;G;D, m:sendonly;recvonly;sendrecv;inactive;confrnce;replcate;netwtest;netwloop, dq-gi

    But when I use nemesis, I get an ICMP error: Port unreachable (Type 3, Code 3). To build this packet, the WAN source address from the capture is replaced with my server's LAN address, using the mgcp-callagent port (2727), and the packet is sent to the LAN address of the box at the mgcp-gateway port (2427). The command I use is:

      nemesis udp -S 192.168.2.1 -D 192.168.2.2 -x 2727 -y 2427 -P packet_to_send

    I also tried a UDP scan of the box on the callagent and gateway ports:

      PORT     STATE         SERVICE
      2727/udp open|filtered unknown
      2427/udp closed        unknown

    I find those results a little strange, because 2427 should be the open port, as it was in the capture:

      Internet Protocol, Src: <ISP MGCP Server>, Dst: <My WAN Address>
      User Datagram Protocol, Src Port: mgcp-callagent (2727), Dst Port: mgcp-gateway (2427)

    Does someone have any idea how to get my box to respond to my requests? Thanks in advance, and sorry for my English.
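
    For what it's worth, the UDP proxy idea described above can be prototyped in a few lines. This is only a sketch: all addresses are placeholders, it handles one direction only, and steering the box's traffic to the relay (for example with a firewall redirect) is left out:

      # Sketch: naive one-way UDP relay that rewrites the box's LAN address to the server's
      # WAN address in MGCP/SDP payloads before forwarding them to the ISP's call agent.
      import socket

      LAN_ADDR   = "192.168.2.2"           # ISP box on the LAN
      WAN_ADDR   = "203.0.113.10"          # server's WAN address (placeholder)
      CALL_AGENT = ("198.51.100.1", 2727)  # ISP MGCP call agent (placeholder)

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.bind(("192.168.2.1", 2727))     # where the box's MGCP traffic gets redirected

      while True:
          data, src = sock.recvfrom(65535)
          # MGCP and SDP are text protocols, so a byte-level substitution is enough
          # for a prototype; a real proxy would also track a mapping for the return path.
          rewritten = data.replace(LAN_ADDR.encode(), WAN_ADDR.encode())
          sock.sendto(rewritten, CALL_AGENT)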

    Read the article

  • One user sometimes gets an unknown certificate error opening Outlook

    - by Chris
    Let me clarify a little. This isn't an unknown certificate error; it's an unknown certificate error in so much as I can't figure out where the certificate comes from. This happens on a Win 7 Enterprise machine connecting to Exchange 2010 with Outlook 2010. The error he gets is that the root is not trusted because it's a self-signed cert. Take a look at this screenshot, because even if I had generated this myself I wouldn't have put "SomeOrganizationalUnit" or "SomeCity" or "SomeState", etc. (The red block covers our domain name.) I'm a little concerned this is a symptom of a security breach. Exchange 2010 has three certificates installed, but none of them is this certificate. They all have different expiration dates (one is expired) and different metadata.

    Edit: There are two scenarios in which I see the certificate warning, and one of them I can reliably repeat:
    - When the user leaves his computer on overnight, Outlook pops the Security Warning window. I don't know what time this happens.
    - Using Outlook Anywhere, if I connect to Exchange externally via a cellular USB modem, the Security Warning window appears every time I close and reopen Outlook.
    Whether I say Yes or No does not make a difference to whether or not I can connect to Exchange and send/receive email; in other words, I can always connect to Exchange. I've checked my two Exchange servers and my Cisco router for a certificate that matches this one and I can't find it.

    Edit 2: Here is a screenshot of the Security Alert window. (I've been calling it Security Warning... my mistake.)

    Edit 3: I stopped seeing this error several weeks ago, but I can't tie it to any single event (I just sort of realized the warning had stopped showing up). I think I found the source of the certificate, though. Last week I found out that the certificate on our website DomainA.com was invalid. I knew that our web admin had installed a valid certificate, so when I looked into the problem I found out I was being presented with the invalid certificate that this posting is about. The Exchange server's domain is mail.DomainA.com, so I can only guess that Outlook was passing this invalid certificate through as it did some kind of check on DomainA.com. This issue is still a mystery, because the certificate warning stopped appearing several weeks ago whereas the invalid certificate issue on the website was only fixed last week. It ended up being a problem with the website control panel: the valid certificate was installed but not being served for some reason, and instead the self-signed cert was being served.

    Read the article

  • Why do I sometimes get 'sh: $'\302\211 ... ': command not found' in xterm/sh?

    - by amn
    Sometimes when I simply type a valid command like 'find ...', or anything really, I get back the following, which is completely unexpected and confusing (... is the command name I typed):

      sh: $'\302\211...': command not found

    There is some corruption going on, I think. I don't use color in my prompt, and I am using the Bash shell in POSIX mode as sh (chsh to /bin/sh and so on - $SHELL is sh). What is going on, and why does this keep happening? Anything I can debug? I think this is more of an xterm issue than an sh one, or at least a combination of the two.

    Files, for context. My /etc/profile, as distributed with Arch Linux x86-64:

      # /etc/profile

      #Set our umask
      umask 022

      # Set our default path
      PATH="/usr/local/sbin:/usr/local/bin:/usr/bin"
      export PATH

      # Load profiles from /etc/profile.d
      if test -d /etc/profile.d/; then
        for profile in /etc/profile.d/*.sh; do
          test -r "$profile" && . "$profile"
        done
        unset profile
      fi

      # Source global bash config
      if test "$PS1" && test "$BASH" && test -r /etc/bash.bashrc; then
        . /etc/bash.bashrc
      fi

      # Termcap is outdated, old, and crusty, kill it.
      unset TERMCAP

      # Man is much better than us at figuring this out
      unset MANPATH

    My /etc/shrc, which I created as a way to have sh parse some file on startup when it is a non-login shell. This is achieved using the ENV variable, set in /etc/environment with the line ENV=/etc/shrc:

      PS1='\u@\H \w \$ '
      alias ls='ls -F --color'
      alias grep='grep -i --color'
      [ -f ~/.shrc ] && . ~/.shrc

    My ~/.profile; I am launching X when logging in through the first virtual tty:

      [[ -z $DISPLAY && $XDG_VTNR -eq 1 ]] && exec xinit -- -dpi 111

    My ~/.xinitrc; as you can see, I am using the system as a VirtualBox guest:

      xrdb -merge ~/.Xresources
      VBoxClient-all
      awesome &
      exec xterm

    And finally, my ~/.Xresources; no fancy stuff here, I guess:

      *faceName: Inconsolata
      *faceSize: 10
      xterm*VT100*translations: #override <Btn1Up>: select-end(PRIMARY, CLIPBOARD, CUT_BUFFER0)
      xterm*colorBDMode: true
      xterm*colorBD: #ff8000
      xterm*cursorColor: S_red

    Since /etc/profile references, among other things, /etc/bash.bashrc, here is its content:

      #
      # /etc/bash.bashrc
      #

      # If not running interactively, don't do anything
      [[ $- != *i* ]] && return

      PS1='[\u@\h \W]\$ '
      PS2='> '
      PS3='> '
      PS4='+ '

      case ${TERM} in
        xterm*|rxvt*|Eterm|aterm|kterm|gnome*)
          PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033]0;%s@%s:%s\007" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"'
          ;;
        screen)
          PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033_%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"'
          ;;
      esac

      [ -r /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion

    I have no idea what that case statement does, by the way; it does look a bit suspicious, but then again, who am I to know.
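
    As a side note on reading the error itself: the $'\302\211...' prefix is how bash prints non-printable bytes, and a quick check (sketch below) shows that octal 302 211 is the UTF-8 encoding of U+0089, i.e. a stray C1 control character is being glued onto the front of the typed command:

      # Decode the escaped bytes that bash shows in the error message.
      raw = bytes([0o302, 0o211])          # the \302\211 prefix
      print(raw)                           # b'\xc2\x89'
      print(raw.decode("utf-8"))           # '\x89' -> U+0089, a C1 control character
      print(hex(ord(raw.decode("utf-8")))) # 0x89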

    Read the article

  • PHP 5.2 to 5.3 not upgrading, no errors

    - by Webnet
    I'm following this guide: http://atik97.wordpress.com/2010/06/12/how-to-upgrade-to-php-5-3-in-ubuntu-9-10/ - I've done all the steps, but it's still showing php 5.2.6 - any ideas? I have also tried -cgi instead of -cli; neither has any effect.

    Update: I've tried rebooting the server to see if that would have any effect, and unfortunately it didn't.

    Update: Output of dpkg -l *php*:

      Desired=Unknown/Install/Remove/Purge/Hold
      | Status=Not/Inst/Cfg-files/Unpacked/Failed-cfg/Half-inst/trig-aWait/Trig-pend
      |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
      ||/ Name                        Version                   Description
      +++-===========================-=========================-==========================================================
      un  libapache2-mod-php4         <none>                    (no description available)
      ii  libapache2-mod-php5         5.2.6.dfsg.1-3ubuntu4.6   server-side, HTML-embedded scripting language (Apache 2 module)
      un  libapache2-mod-php5filter   <none>                    (no description available)
      ii  php-pear                    5.2.6.dfsg.1-3ubuntu4.6   PEAR - PHP Extension and Application Repository
      un  php4-cli                    <none>                    (no description available)
      un  php4-dev                    <none>                    (no description available)
      un  php4-mysql                  <none>                    (no description available)
      un  php4-pear                   <none>                    (no description available)
      ii  php5                        5.2.6.dfsg.1-3ubuntu4.6   server-side, HTML-embedded scripting language (metapackage)
      ii  php5-cgi                    5.2.6.dfsg.1-3ubuntu4.6   server-side, HTML-embedded scripting language (CGI binary)
      ii  php5-cli                    5.2.6.dfsg.1-3ubuntu4.6   command-line interpreter for the php5 scripting language
      ii  php5-common                 5.2.6.dfsg.1-3ubuntu4.6   Common files for packages built from the php5 source
      ii  php5-curl                   5.2.6.dfsg.1-3ubuntu4.6   CURL module for php5
      un  php5-dev                    <none>                    (no description available)
      ii  php5-gd                     5.2.6.dfsg.1-3ubuntu4.6   GD module for php5
      ii  php5-imap                   5.2.6-0ubuntu5.1          IMAP module for php5
      un  php5-json                   <none>                    (no description available)
      ii  php5-mcrypt                 5.2.6-0ubuntu2            MCrypt module for php5
      ii  php5-mysql                  5.2.6.dfsg.1-3ubuntu4.6   MySQL module for php5
      un  php5-mysqli                 <none>                    (no description available)
      ii  php5-xsl                    5.2.6.dfsg.1-3ubuntu4.6   XSL module for php5
      un  phpapi-20060613+lfs         <none>                    (no description available)
      ii  phpmyadmin                  4:3.1.2-1ubuntu0.2        MySQL web administration tool

    Update: The following commands and their outputs:

      grep php53 /etc/apt/sources.list
      deb http://php53.dotdeb.org stable all
      deb-src http://php53.dotdeb.org stable all

      apt-cache search -f "libapache2-mod-php5"
      http://pastebin.com/XNXdsXYC

    Update: I've updated the question with more details on installed packages.

    Read the article

  • Google Chrome and Firefox Rendering

    - by user13503
    When attempting to visit sites running javascript, Google Chrome and Firefox often fail to render them properly. For example, when hitting google.com/codesearch results, I just get a blank page: http://img340.imageshack.us/img340/8527/200910071518.png

    This rendering problem happens in both browsers on a machine running Windows XP Professional x64. Internet Explorer is able to render the page just fine. Also, controls on other pages sometimes fail to respond, for example in Google Docs. Has anyone encountered this issue before in Chrome/Firefox? Do they share a common installation of the V8 javascript engine? I'm baffled by this issue.

    Here's the source code that Chrome loads when attempting to retrieve the page shown in the image above:

      <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"><html><head><script>function a(c){this.t={};this.tick=function(d,e,b){var f=b?b:(new Date).getTime();this.t[d]=[f,e]};this.tick("start",null,c)}var g=new a;window.jstiming={Timer:a,load:g};try{window.jstiming.pt=window.external.pageT}catch(h){};
      </script><title></title><script language="javascript" src="/codesearch/js/CachedFile/F1A2CB189D0FCB1FF201C42BF6A5447C.cache.js"></script></head><body><iframe src="javascript:''" id="__gwt_historyFrame" style="width:0;height:0;border:0"></iframe>
      <script>if(window.jstiming){window.jstiming.a={};window.jstiming.c=1;function k(a,d,f){var b=a.t[d];if(!b)return undefined;b=a.t[d][0];if(f!=undefined)var h=f;else h=a.t.start[0];return b-h}window.jstiming.report=function(a,d,f){var b="";if(window.jstiming.pt){b+="&srt="+window.jstiming.pt;delete window.jstiming.pt}try{if(window.external&&window.external.tran)b+="&tran="+window.external.tran}catch(h){}if(a.b)b+="&"+a.b;var e=a.t,p=e.start,l=[],i=[];for(var c in e)if(!(c=="start"))if(!(c.indexOf("_")==0)){var j= e[c][1];if(j)e[j]&&i.push(c+"."+k(a,c,e[j][0]));else p&&l.push(c+"."+k(a,c))}delete e.start;if(d)for(var m in d)b+="&"+m+"="+d[m];var n=[f?f:"http://csi.gstatic.com/csi","?v=3","&s="+(window.jstiming.sn?window.jstiming.sn:"codesearch")+"&action=",a.name,i.length?"&it="+i.join(",")+b:b,"&rt=",l.join(",")].join(""),g=new Image,o=window.jstiming.c++;window.jstiming.a[o]=g;g.onload=g.onerror=function(){delete window.jstiming.a[o]};g.src=n;g=null;return n}};
      </script></body></html>

    Thanks, Steve

    Read the article

  • linux disk usage report inconsistency after removing file. cpanel inaccurate disk usage report

    - by brando
    Relevant software:
    - Red Hat Enterprise Linux Server release 6.3 (Santiago)
    - cPanel installed: 11.34.0 (build 7)

    Background and problem: I was getting a disk usage warning (via cPanel) because /var seemed to be filling up on my server. The assumption would be that there was a log file growing too large and filling up the partition. I recently removed a large log file and changed my syslog config to rotate the log files more regularly: I removed something like /var/log/somefile and edited /etc/rsyslog.conf. This is why I was suspicious of the disk usage warning issued by cPanel - it didn't seem right.

    This is what df was reporting for the partitions:

      $ [/var]# df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/sda2             9.9G  511M  8.9G   6% /
      tmpfs                 5.9G     0  5.9G   0% /dev/shm
      /dev/sda1              99M   53M   42M  56% /boot
      /dev/sda8             883G  384G  455G  46% /home
      /dev/sdb1             9.9G  151M  9.3G   2% /tmp
      /dev/sda3             9.9G  7.8G  1.6G  84% /usr
      /dev/sda5             9.9G  9.3G  108M  99% /var

    This is what du was reporting for the /var mount point:

      $ [/var]# du -sh
      528M  .

    Clearly something funky was going on. I had a similar kind of reporting inconsistency in the past, and df reporting seemed to be correct after I restarted the server, so I decided to reboot the server to see if the same thing would happen. This is what df reports now:

      $ [~]# df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/sda2             9.9G  511M  8.9G   6% /
      tmpfs                 5.9G     0  5.9G   0% /dev/shm
      /dev/sda1              99M   53M   42M  56% /boot
      /dev/sda8             883G  384G  455G  46% /home
      /dev/sdb1             9.9G  151M  9.3G   2% /tmp
      /dev/sda3             9.9G  7.8G  1.6G  84% /usr
      /dev/sda5             9.9G  697M  8.7G   8% /var

    This looks more like what I'd expect. For consistency, this is what du reports for /var:

      $ [/var]# du -sh
      638M  .

    Question: This is a nuisance. I'm not sure where the disk usage reports issued by cPanel get their info, but it clearly isn't correct. How can I avoid this inaccurate reporting in the future? It seems like df reporting the wrong disk usage is a strong indicator of the source problem, but I'm not sure. Is there a way to 'refresh' the filesystem somehow so that the df report is accurate without restarting the server? Any other ideas for resolving this issue?
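
    To make the df/du comparison above reproducible without relying on cPanel's numbers, the two views can be computed directly from Python; a small sketch (the /var path is just the mount point in question):

      # Sketch: compare the filesystem's own accounting (what df reports)
      # with the sum of visible file sizes (what du reports) for one mount point.
      import os

      MOUNT = "/var"

      st = os.statvfs(MOUNT)
      fs_used = (st.f_blocks - st.f_bfree) * st.f_frsize   # df-style "Used"

      tree_used = 0
      for root, dirs, files in os.walk(MOUNT):
          for name in files:
              try:
                  tree_used += os.lstat(os.path.join(root, name)).st_blocks * 512  # du-style
              except OSError:
                  pass

      print("df-style used:", fs_used // 2**20, "MiB")
      print("du-style used:", tree_used // 2**20, "MiB")
      # A large gap usually means space is held by something the tree walk cannot see,
      # for example files that were deleted while a process still has them open.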

    Read the article

  • How to use AD/GPO/Print Services to "push out" a new printer driver to replace a broken one? How did my server get a broken driver?

    - by Zac B
    Context: We have an AD/GPO-managed corporate network with a little over a hundred PCs running Windows 7 x64, and a few managed printers. Our Server 2008 R2 primary domain controller is configured as a print server for them all.

    Problem: After a recent Windows update and restart (no printer driver updates were included) on the DC, a particular shared printer (Lexmark T650) has begun exhibiting some strange behavior. First, it prints a preceding and following blank page for almost every document, on jobs submitted by about half of the client machines (no separator page is configured on the server or any of the clients I've seen). Second, whenever someone tries to access "Printing Preferences" on any client, they receive an error message; this happens everywhere, 100% of the time, and didn't happen before the update on the DC. Once they click "OK", the prefs screen appears (with no separator page selected) and everything seems fine. I'm not even sure if these two issues are related, but everyone seems affected by one or both of them.

    What I've Tried: I've been hesitant to un-deploy the problem printer, or remove it via GPO, as it's pretty heavily used. I've tried updating (via MS Update and our internal WSUS server) the client machines and the DC. No printer driver updates have appeared, and no number of updates or restarts on the server or the clients seems to have achieved anything other than my boss getting grumpy that I'm bouncing the domain controller so often. I've tried deleting the drivers on the server and re-installing them from the original source that has worked for the past year... no change. I've tried selecting "New Driver" for one of the shared printers on a client machine, running as domain admin, and pushed the latest driver found by MS Update back up to the DC. This changed the version number of the driver recorded in the print server manager, but caused no change, either on the client I pushed from or on any other. The error still appears.

    Question: Why the heck is this happening? Obviously I got a bad driver from somewhere, but how do I get rid of it? I don't know of any "roll back drivers" functionality for centrally managed print drivers like Windows offers for other devices. How would I a) get this issue resolved on a client, and b) push the fix to the other members of the domain?

    Read the article

  • General guidelines / workflow to convert or transfer video "professionally"?

    - by cloneman
    I'm an IT "professional" who sometimes has to deal with small video conversion / video cutting projects, and I'd like to learn "the right way" to do this. Every time I search Google, there's always a disaster of weird, low-maturity trialware, or random forum threads from 3-4 years ago indicating various antiquated methods to do it.

    The big question is the following: What are the "general" guidelines and tools to transcode video into some efficient (lossless?) intermediary, for editing purposes, with the aim of eventually re-encoding it afterwards? It seems to me like even the simplest of formats and tasks are a disaster of endless trial and error, or expertise only known by hardened experts who have a swiss army knife of weird conversion tools that they use, almost as if mounting an attack against the project.

    Here are a few cases in point:
    - Simple VOB files extracted from DVD footage can't be imported into Adobe Premiere directly.
    - Virtualdub is an old piece of software people keep recommending, but it doesn't seem to support newer formats.
    - I don't even know how to tell with certainty which codecs a video uses, whether the image is interlaced or not, and what resolution I'm dealing with.

    Problems:
    - Choosing a wrong interlace option, which diminishes quality
    - Choosing a wrong pixel aspect ratio (stretches the image)
    - Choosing a wrong "project type" in Premiere, causing footage to require scaling
    - Being forced to use some weird program that will have any number of negative effects

    What I'm looking for:
    - Books or "real knowledge" on format conversions, recognized tools, etc. that aren't some random forum guides on how to deal with video formats.
    - Workflow guidelines on identifying a format and going from one format to another without the problems mentioned above.
    - Documentation on what programs like Adobe Premiere can and can't do with regard to formats, so that I don't use a wrench as a hammer.

    TL;DR: How should you convert or "prepare" a video file to ensure it will be supported by Premiere for editing? Is Premiere a suitable program to handle cropping and encoding, or should other tools be used for this, when making a video montage from a variety of source formats?

    Read the article

  • What is the difference between running a Windows service vs. running through shell?

    - by Zack
    I am trying to troubleshoot an issue on a Windows 2008 server where attempting to connect to a "Timberline Data Source" ODBC driver crashes if the call is made in a "service" context, but succeeds if the call is initiated manually in a Remote Desktop session. I have set the service to run as my user. I'm wondering: all else being equal (user, machine, etc.), are there any fundamental security/environment differences between running a process as a service vs. manually?

    --- Implementation Details ---

    In case it is helpful for anyone: I had a system that started as an attempt to connect to a Timberline database using ODBC and a Python CGI script called via IIS 7. The script itself works fine; however, as soon as I attempt to perform the ODBC connect function, the script crashes without throwing an exception. The script was able to connect fine when executed via the command line. The same thing happened when using a C#/.NET service, or when attempting to run via Apache, Windows Scheduler, or even a 3rd-party scheduling tool. With the last option (the 3rd-party scheduling tool, pycron) I set the service up to log in as my user and had the same issue (I confirmed via Task Manager that the process's running user was, in fact, me). It just doesn't make sense to me why a service, which should be running as my user, appears to still be operating in a different security context or environment.

    Also, if it's important, the Timberline database is referenced by computer name on the network ("\\timberline-server\Timberline Office\Accounts\AT" or something to that effect). I also realized that, as Joel pointed out, the server DOES have a mapped drive ("Y:", which is mapped to "\\timberline-server\Timberline Office"). The DSN is set up at the "System DSN" level which, according to the ODBC Administration tool, means that the DSN is available to users and services.

    Since I'm not allowed to answer this question yet, I'll post the solution that I arrived at: as Joel Coel mentioned, there actually was a mapped-drive scenario. I didn't realize this because the DSN specified a path using UNC. However, it seems as though the actual Timberline driver referred to a mapped drive. Since services don't start with the mapped drive, I was forced to add the drive-mapping code into my service. Since it was written in Python, I used code from a Stackoverflow answer that was able to map the drive on the fly.
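
    The drive-mapping workaround mentioned at the end can be done from inside the service process itself before the ODBC connect. A minimal sketch follows; the share path and drive letter match the description above, while the credentials argument is a placeholder assumption:

      # Sketch: map the network drive from within the service before touching the ODBC DSN.
      # The share path matches the description above; user/password are placeholders.
      import subprocess

      def map_drive(letter, unc, user=None, password=None):
          cmd = ["net", "use", letter, unc]
          if user and password:
              cmd += [password, "/user:" + user]
          # "net use" exits non-zero if the mapping fails, so check=True surfaces the error
          subprocess.run(cmd, check=True)

      map_drive("Y:", r"\\timberline-server\Timberline Office")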

    Read the article

  • Server to server replication and CPU and 32K / corrupt doc

    - by nick wall
    Summary: if a database contains a doc with the 32K issue, or a corrupt doc, then server-to-server replication causes a marked increase in CPU in the nserver.exe task, which effectively causes our server(s) to slow right down.

    We have a 5-server cluster (1 "hub" and 4 HTTP servers accessed via reverse proxy and SSO for load balancing and redundancy). All are physically located next to each other on the network; they don't have dedicated network ports for clustering or replication. I realise the IBM recommendation is a dedicated port for clustering. Cluster queues are in tolerance, and under heavy application user load (i.e. the maximum number of documents being created, edited, deleted) the replication times between servers are negligible. Normally, all is well.

    Of the servers in the cluster, one is considered the "hub", and it initiates a PUSH-PULL replication with its cluster mates every 60 minutes, so that the replication load is taken by the hub and not the cluster mates.

    The problem we have: every now and then we get a slow replication time from the hub to a cluster mate, sometimes up to 30 minutes. This maxes out the nserver.exe task on the cluster mate, which causes it to respond to HTTP requests very slowly. In the past, we have found that a corrupt document in the DB can have this effect, but on those occasions the server log shows the corrupt doc's noteId, we run fixup, and all is well. But we are not now seeing any record of corrupt docs. What we have noticed is that if a doc with the 32K issue is present, the same thing can happen. Our only solution in that case is to run: fixup mydb.nsf -V, which shows it is purging a 32K doc. Luckily we run a reverse proxy, so we can shut HTTP servers down without users noticing, but users do notice when a server has the problem!

    Has anyone else seen this occur? I have set up DDM event handlers for many of the replication events, and I have set the replication time-out limit to 5 minutes (the max we usually see under full user load is 0.1 min) to prevent it rep'ing for 30 minutes as before. This is a temporary workaround. Does anyone know of a DDM event to trap the 32K issue? We could at least then send an alert.

    Regarding the 32K issue: this probably needs another thread, but we are finding it relatively hard to find the source of the issue because the 32K event is fairly rare. Our app is fairly complex, interacting with various other external web services, with two-way data transfer. But if we do encounter a 32K doc, we can't look at field properties, so we can't work out which field has the issue, which would give us a clue as to which process is the culprit. As above, we run a fixup -V. Any help or comments on this would be gratefully received.

    Read the article

  • Troubleshooting Network Speeds -- The Age Old Inquiry

    - by John K
    I'm looking for help with what I'm sure is an age-old question. I've found myself in a situation of yearning to understand network throughput more clearly, but I can't seem to find information that makes it "click".

    We have a few servers distributed geographically, running various versions of Windows. Assuming we always use one host (a desktop) as the source, when copying data from that host to other servers across the country, we see a high variance in speed. In some cases we can copy data at 12 MB/s consistently; in others we're seeing 0.8 MB/s. It should be noted that, after testing 8 destinations, we always seem to be at either 0.6-0.8 MB/s or 11-12 MB/s. In the building we're primarily concerned with, we have an OC-3 connection to our ISP. I know there are a lot of variables at play, but I guess I was hoping the experts here could help answer a few basic questions to help bolster my understanding.

    1.) For older machines running Windows XP, Server 2003, etc., with a 100 Mbps Ethernet card and 72 ms typical latency, does 0.8 MB/s sound at all reasonable? Or do you think that is slow enough to indicate a problem?

    2.) The classic "mathematical fastest speed" of "throughput = TCP window / latency" is, in our case, calculated to 0.8 MB/s (64 KB / 72 ms). My understanding is that this is an upper bound you would never expect to reach (due to overhead), let alone surpass. In some cases, though, we're seeing speeds of 12.3 MB/s. There are Steelhead accelerators scattered around the network; could those account for such a higher transfer rate?

    3.) It's been suggested that the use of SMB vs. SMB2 could explain the differences in speed. Indeed, as expected, packet captures show both being used depending on the OS versions in play. I understand what determines whether SMB2 gets used, but I'm curious to know what kind of performance gain you can expect with SMB2.

    My problem simply seems to be a lack of experience and, more importantly, perspective in terms of what are and are not reasonable network speeds. Could anyone help impart some context/perspective?
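
    The window/latency ceiling mentioned in question 2 is easy to sanity-check numerically; a quick sketch using the figures quoted above (the 1 MB window in the second calculation is just an illustrative assumption):

      # Sketch: single-stream TCP throughput ceiling = receive window / round-trip time.
      window_bytes = 64 * 1024        # classic 64 KB window without window scaling
      rtt_seconds = 0.072             # 72 ms latency from the question

      ceiling = window_bytes / rtt_seconds
      print(round(ceiling / 1e6, 2), "MB/s")   # ~0.91 MB/s, in line with the ~0.8 MB/s observed

      # With window scaling (say a 1 MB window), the same latency allows roughly
      # 1,048,576 / 0.072 ~ 14.6 MB/s, which is in the range of the faster 11-12 MB/s copies.
      print(round(1_048_576 / rtt_seconds / 1e6, 2), "MB/s")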

    Read the article

  • OpenVPN Chaining

    - by noderunner
    I'm trying to set up an OpenVPN "chain", similar to what is described here. I have two separate networks, A and B. Each network has an OpenVPN server using a standard "road warrior" or "client/server" approach. A client can connect to either one for access to the hosts/services on that respective network. But servers A and B are also connected to each other: the servers on each network have a "site-to-site" connection between the two.

    What I'm trying to accomplish is the ability to connect to network A as a client and then make connections with hosts on network B. I'm using tun/routing for all of the VPN connections. The "chain" looks something like this:

      [Client] --- [Server A] --- [Server A] --- [Server B] --- [Server B] --- [Host B]
       (tun0)        (tun0)         (tun1)          (tun0)         (eth0)       (eth0)

    The whole idea is that server A should route traffic destined for network B through the "site-to-site" VPN set up on tun1 when a client from tun0 tries to connect. I did this simply by setting up two connection profiles on server A. One profile is a standard server config running on tun0, defining a virtual client network, IP address pool, pushing routes, etc. The other is a client connection to Server B running on tun1. With ip_forwarding enabled, I then simply added a "push route" to the clients advertising a route to network B.

    On server A, this seems to work when I look at tcpdump output. If I connect as a client and then ping a host on network B, I can see the traffic getting passed from tun0 to tun1 on Server A:

      tcpdump -nSi tun1 icmp

    The weird thing is that I don't see Server B receiving that traffic through the tunnel. It's as if Server A is sending it through the site-to-site connection like it should, but server B is completely ignoring it. When I look for the traffic on Server B, it simply isn't there. A ping from Server A to Host B works fine, but a ping from a client connected to Server A to Host B does not.

    I'm wondering if Server B is ignoring the traffic because the source IP does not match the client IP pool that it hands out to its own clients. Does anyone know if I need to do something on Server B in order for it to see the traffic? This is a complicated problem to explain, so thanks if you stuck with me this far.

    Read the article

  • How to find the process(es) which are hogging the machine

    - by Aaron Digulla
    Scenario: All of a sudden, my computer feels sluggish. The mouse moves, but windows take ages to open, etc. uptime says the load is 7.69 and rising.

    What is the fastest way to find out which process(es) are the cause of the load? Now, "top" and similar tools aren't the answer, because they either show CPU or memory usage but not both at the same time. What I need is a single command which I might be able to type as it happens - something that will figure out any of:
    - "System is trying to swap 8GB of RAM to disk because process X ..."
    - "process X seeks all over the disk"
    - "process X uses 400% CPU"

    So what I'm looking for is iostat, htop/atop and similar tools rolled into one, with output like this:

      1235 cp        - Disk thrashing
        87 chrome    - Uses 2 GB of RAM
       137 nfs_bench - Uses 95% of the network bandwidth

    I don't want a tool that gives me some numbers which I can analyze, but a tool that tells me exactly which process causes the current load. Assume that the user in front of the keyboard barely knows how to write "process", and is quickly overwhelmed when it comes to "resident size", "virtual memory" or "process life cycle".

    My argument goes like this: a user notices a problem. There can be thousands of reasons ... well, almost :-) The user wants to know the source of the problem. The current solutions give me lots of numbers, and I need to know what those numbers mean. What I'm looking for is a meta tool. 99% of the data is irrelevant to the problem, so what the tool should do is look for processes which hog some resource and list only those, along with "this process needs a lot of CPU, this one produces many IRQs, this process allocates a lot of RAM (and it's still growing)". This will be a relatively short list. It will be much simpler for someone new to this to locate the culprit from such a list than from the output of, say, htop, which gives me about 5000 numbers but requires me to fold multi-threaded processes myself (I have 50 lines which say VIRT 2750M but only 16 GB of RAM - the machine ought to swap itself to death, but of course this is a misinterpretation of the data that can happen quickly).
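
    A rough version of the "meta tool" described above can be put together with psutil; this is only a sketch, it assumes the third-party psutil package is installed, and the thresholds are arbitrary examples:

      # Sketch: list only the processes currently hogging CPU or memory, in plain language.
      import time
      import psutil

      def culprits(cpu_threshold=50.0, mem_threshold_mb=1024):
          procs = list(psutil.process_iter(["pid", "name"]))
          for p in procs:
              try:
                  p.cpu_percent(None)        # first call just primes the per-process counter
              except psutil.Error:
                  pass
          time.sleep(1.0)                    # measure over a one-second window
          for p in procs:
              try:
                  cpu = p.cpu_percent(None)
                  rss_mb = p.memory_info().rss / 2**20
                  if cpu >= cpu_threshold:
                      print(p.pid, p.info["name"], "- uses %.0f%% CPU" % cpu)
                  if rss_mb >= mem_threshold_mb:
                      print(p.pid, p.info["name"], "- uses %.1f GB of RAM" % (rss_mb / 1024))
              except psutil.Error:
                  continue

      culprits()

    Extending the same idea with per-process I/O counters would cover the "seeks all over the disk" case as well.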

    Read the article

  • Remote Debian System Preventing Logon

    - by choobablue
    I have a dozen or so single-board computers on a network running Debian (squeeze), which I access via ssh (the ssh server is dropbear). To give an idea of the hardware of these computers: they have 1.2 GHz x86 processors, 1GB of RAM and 4GB flash drives formatted as ext2 (I avoided ext3 to prevent the added flash write stress from journaling); there is also a swap partition on the drive.

    Normally the setup I'm using works great and I can access all the computers. Every once in a while, one will prevent access. What happens is that I try to connect via ssh (putty) and it gives me the login prompt; I enter the username and password and it responds 'Access Denied', and it will also refuse any public key in ~/.ssh/authorized_keys. The credentials are correct, as they worked previously. The computer responds to pings, and putty recognizes the server's public key, which implies to me the system is still running. Restarting the server fixes the problem and I can log in again. (I tried a temporary fix of putting shutdown -r now in the root crontab, but this doesn't seem to be run reliably once the hang happens.) Once I restart, however, there doesn't seem to be any information in any of the system logs to indicate what happened; the logs are simply empty for that time period, as if the system had crashed.

    There is some custom software running on the system which appears to stop working (which is why I wanted to ssh in to begin with). I'm assuming that this program is the source of the problems, but I'm unsure how it would cause them and how to debug what is happening. The most likely explanation I can think of is that there is a memory leak in the other program that then prevents dropbear from spawning a new login shell (and crontab from executing shutdown) because there is not enough free memory. But looking at the memory usage of the other (working) computers, there doesn't seem to be any meaningful increase in memory use to indicate a leak (unless it's a very big, fast-acting and rare leak). I would also think that when the OS ran out of memory it would restart the system or kill processes (the Linux kernel restarts, right?).

    The other thing I wonder about is whether the fact that they are running off a flash drive could have some effect, especially the swap partition (which I think I should remove to prevent wear of the flash), but the flash drives are young (~1 month) and I don't think wear would be a factor yet.

    Does anybody have an idea of what could cause these symptoms, whether it could be caused by a memory leak, or something else I haven't thought of? And does anybody know of a method to debug the problem and find out more about what's going wrong?
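
    One low-effort way to test the memory-leak theory above is to leave a tiny logger running on each board and look at the tail of the log after the next lock-up. A sketch follows; the log path and the one-minute interval are arbitrary choices, not part of the original setup:

      # Sketch: append free-memory figures to a log every minute so the state just
      # before a lock-up can be inspected afterwards.
      import time

      def meminfo(keys=("MemFree", "Buffers", "Cached", "SwapFree")):
          values = {}
          with open("/proc/meminfo") as f:
              for line in f:
                  name, rest = line.split(":", 1)
                  if name in keys:
                      values[name] = rest.strip()
          return values

      while True:
          with open("/var/log/memwatch.log", "a") as log:
              stamp = time.strftime("%Y-%m-%d %H:%M:%S")
              log.write(stamp + " " + str(meminfo()) + "\n")
          time.sleep(60)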

    Read the article

  • How to loop through all illustrator files in a folder (CS6)

    - by Julian
    I have written some JavaScript to save .ai files to two separate locations with different resolutions, one of them being cropped to a reduced-size artboard (courtesy of John Otterud / Articmill for the main part). There are other variables in the script that I am not using at present, but I want to leave the functionality there for a later date/additional layers to export/other resolutions etc.

    I can't get it to loop through all files in a folder. I cannot find the script that works - or can't insert it at the right place. I can get as far as selecting the folder, and I suppose creating an array, but after that what next?

    This is the create-array part of the script:

      // JavaScript Document
      // Set up variables
      var destDoc, sourceDoc, sourceFolder, newLayer;

      // Select the source folder.
      sourceFolder = Folder.selectDialog('Select the folder with Illustrator files that you want to merge into one', '~');
      destDoc = app.documents.add();

      // If a valid folder is selected
      if (sourceFolder != null) {
          files = new Array();
          // Get all files matching the pattern
          files = sourceFolder.getFiles();

    I have inserted this at the beginning of the main script (probably where I am going wrong, because I can select the folder but then nothing more):

      #target illustrator
      var docRef = app.activeDocument;
      with (docRef) {
          if (layers[i].name = 'HEADER') {
              layers[i].name = '#' + activeDocument.name;
              save()
          }
      }

      // *** Export Layers as PNG files (in multiple resolutions) ***
      var subFolderName = "For_PLMA";
      var subFolderTwoName = "For_VLP";
      var saveInMultipleResolutions = true;
      // ...
      // Note: only use one character!
      var exportLayersStartingWith = "%";
      var exportLayersWithArtboardClippingStartingWith = "#";
      // ...
      var normalResolutionFileAppend = "_VLP";
      var highResolutionFileAppend = "_PLMA";
      // ...
      var normalResolutionScale = 100;
      var highResolutionScale = 200;
      var veryhighResolutionScale = 300;

      // *** Start of script ***
      var doc = app.activeDocument;
      // Make sure we have saved the document
      if (doc.path != "") {

    Then the rest of the export script runs on from there.

    Read the article

  • Asterisk: Dropping calls with an "ast_yyerror"

    - by Nick
    I'm having an intermittent issue where asterisk will play our greeting to the caller and then drop the call instead of making our phones ring. I'm unable to reproduce the problem with any phones I have here, and many callers get through just fine. Some callers, though, run into the problem, and I can't find any pattern to it. The bit of information I could find said it was caused by an error in evaluating a dialplan expression. I'm thinking it's this line:

      exten = START,n,GotoIf($[${FORCE_CLOSED}=TRUE]?CLOSED,1)

    But I'm not sure what's wrong with it. I see the following error on the console:

      [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:459 ast_yyerror: ast_yyerror(): syntax error: syntax error, unexpected '=', expecting $end; Input:=TRUE^

    Surrounding console output:

      -- Executing [START@AGInbound:1] Answer("IAX2/AtlantaTeliax-10086", "") in new stack
      -- Executing [START@AGInbound:2] BackGround("IAX2/AtlantaTeliax-10086", 0000_AG_THANK_YOU_FOR_CALLING_AG") in new stack
      -- Playing '0000_AG_THANK_YOU_FOR_CALLING_AG.slin' (language 'en')
      [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:459 ast_yyerror: ast_yyerror(): syntax error: syntax error, unexpected '=', expecting $end; Input: =TRUE ^
      [Apr 4 16:29:49] WARNING[27038]: ast_expr2.fl:463 ast_yyerror: If you have questions, please refer to doc/tex/channelvariables.tex in the asterisk source.
      -- Executing [START@AGInbound:3] GotoIf("IAX2/AtlantaTeliax-10086", "?CLOSED,1") in new stack
      -- Executing [START@AGInbound:4] GotoIfTime("IAX2/AtlantaTeliax-10086", "9:30-17:0|mon-fri|*|*?OPEN,1") in new stack
      -- Executing [START@AGInbound:5] GotoIfTime("IAX2/AtlantaTeliax-10086", "10:0-18:30|sat|*|*?OPEN,1") in new stack
      -- Executing [START@AGInbound:6] GotoIfTime("IAX2/AtlantaTeliax-10086", "12:0-17:0|sun|*|*?OPEN,1") in new stack

    Relevant lines from the dialplan:

      exten = START,1,Answer()
      exten = START,n,Background(0000_AG_THANK_YOU_FOR_CALLING_AG)
      ; See if we're open
      ; Force Closed if no one's going to be answering
      exten = START,n,GotoIf($[${FORCE_CLOSED}=TRUE]?CLOSED,1)
      exten = START,n,GotoIfTime(${AG_WEEKDAY_OPEN_HOUR}:${AG_WEEKDAY_OPEN_MIN}-${AG$
      exten = START,n,GotoIfTime(${AG_SATURDAY_OPEN_HOUR}:${AG_SATURDAY_OPEN_MIN}-${$
      exten = START,n,GotoIfTime(${AG_SUNDAY_OPEN_HOUR}:${AG_SUNDAY_OPEN_MIN}-${AG_S$
      ; ...and we're not. But maybe the time of day has been overridden?
      exten = START,n,GotoIf($[${OVERRIDE_TIME_OF_DAY}=TRUE]?OPEN,1)
      ; No override... We're definitely closed.
      exten = START,n,Goto(CLOSED,1)

    Any idea what's wrong with the expression? We recently upgraded from 1.4 to 1.6.

    Read the article
