Search Results

Search found 4458 results on 179 pages for 'individual improvement'.

Page 125/179

  • How to use Windows mini-dump files?

    - by ekaj
    I have a Mini-ITX Intel DH61AG mobo w/ an Intel i3 processor and 8GB of 1600MHz DDR3 RAM. Anyways, this computer has been crashing kind of frequently. It is not an OS problem, as I have used Ubuntu (and had kernel panics), Windows 7, and Windows 8 (BSODs aren't going to keep me from tinkering =p). Each of these OSes has had problems, so I ran an HDD check, and I know it is not a heat issue because I tested the processor for a few days when I first put the computer together. When I ran memtest86+, however, I got an error - so I did individual testing, and both chips came back good; I did a really intense test with both of them again (took half a day), and no errors. So I still think the problem could be RAM, but I am not sure - I tested it pretty extensively (might let it run all night again tonight)... which brings me to my point. Could someone explain to me (in simple terms if possible) how to READ the minidump files of Windows computers? I've tried before with a guide I found online, but failed miserably (can't remember the guide, either =/). I'm fine with installing the software; I will probably need it sometime in the future as well. I have seen a few other posts on SU that just ask people to post minidump logs, but I feel as if that is too localized. Would someone be able to explain this? Note: If someone knows how to do this, but doesn't want to explain and is still willing to help me, this is the link for the minidump file =p Make sure to click
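
    A hedged sketch of the usual approach, assuming the Debugging Tools for Windows (WinDbg) are installed; the dump file name below is a placeholder:

        set _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/download/symbols
        windbg -z C:\Windows\Minidump\012345-67890-01.dmp
        rem then, inside WinDbg:
        rem   !analyze -v    - summarises the probable faulting driver/module
        rem   lm             - lists loaded modules, handy for spotting old drivers

    NirSoft's BlueScreenView is a lighter-weight alternative if full WinDbg feels like overkill.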

    Read the article

  • ESXi 4.0 Guests Locking up

    - by Brendan Sherwin
    I installed ESXi 4.0 on an HP ProLiant G5 with a 64-bit Xeon processor and took advantage of the free license, as I work for a public school. I created two instances of Server 2003 from scratch, one to be the DC and DHCP server, the other to be a file server and DNS/DHCP backup. I had both guests up and running fine, set up my user accounts, transferred the data, etc. Once I joined a client machine to the domain, I would find that both of my Windows guests would lock up. Sometimes it would be for five or so minutes; once it was overnight. The "locked up" state means that as far as I could tell, all services were stopped: DHCP no longer handed out IPs, DNS stopped working, and I couldn't RDP into the server. The ESXi host, my HP server, was still running fine. vSphere was working, and I could look at the performance of the individual guests. I would try powering off the guests from inside vSphere, and they would start powering off but get stuck at 95%, and stay that way, sometimes only for 10 minutes, other times for hours. Several times I had to restart ESXi from its console in order to restart my machines. Now, can anyone tell me what is happening, and how I can fix it, or take steps to prevent it? I hired a consultant to come take a look at it, someone whose experience and knowledge I trust, and he told me he had never seen anything like this before. He spoke to a friend of his who is VM certified, and he also said he had never heard of this issue. Thanks for your replies, and I'll do my best to respond ASAP. Currently, the server is powered off, and I've reinstituted my nine-year-old Server 2000 boxes, and I'm considering installing ESXi 3.5. Does anyone know if a guest created in 4.0 will work in 3.5? I'd really like to avoid having to rebuild those accounts! I know 4.0 works on this server, as I have another server in another school with the exact same hardware running 4.0 fine. Brendan

    Read the article

  • Set default MySQL connect charset for PHP (in RHEL)?

    - by Martijn Heemels
    We're running a hundred or so legacy PHP websites on an older server which runs Gentoo Linux. When these sites were built, latin1 was still the common charset, both in PHP and MySQL. To make sure those older sites used latin1 by default, while still allowing newer sites to use utf8 (our current standard), we set the default connect charset in php.ini:

        mysql.connect_charset = latin1
        mysqli.connect_charset = latin1
        pdo_mysql.connect_charset = latin1

    Specific, more modern sites could override this in their bootstrapping code with:

        <?php mysql_set_charset("utf8", $dsn );

    ...and all was well. Now the server is overloaded and we're no longer with that hoster, so we're moving all these sites to a faster server at our standard hoster, which uses RHEL 5 as their OS of choice. In setting up this new server I discovered to my surprise that the *.connect_charset directives are a Gentoo-specific patch to PHP, and RHEL's version of PHP doesn't recognize them! Now how do I get PHP to connect to MySQL with the latin1 charset? I thought about setting a default in my.cnf but would prefer not to force every app and client to default to latin1. Our policy is to use utf8, and we'd like to restrict the exception to PHP only. Also, converting every legacy site to properly use utf8 is not doable, since many are of the touch 'm and you break 'm kind. We simply don't have the time to go fix them all. How would I set a default mysql/mysqli/pdo_mysql connection charset to latin1 for PHP, while still allowing individual scripts to override this to utf8 with mysql_set_charset()?

    Read the article

  • What's hogging my CPU?

    - by endolith
    Ubuntu's System Monitor applet shows 100% CPU usage continuously. If I click it, the Resources tab shows it at 100% continuously, too. If I go to Processes, though, to find out which process is the culprit, there is nothing above 10%. If I run top there is nothing above 10%. I try killing lots of things, but it continues at 100%. How can I find out what's hogging the CPU? This is an unusual situation on a computer I use daily, which normally only hits 100% CPU when I'm doing something that requires it (like loading 32 Firefox tabs), after which it goes back to a normal idle level. It's not a new install or anything. It shouldn't be maxed out. I'm not sure when it started or if I changed something that caused it to happen. Normally I would use top or System Monitor and find the process that had gone out of control, but I can't find anything with those tools this time. It persists after reboots and everything. And the processor is obviously hot, so it's not an erroneous reading. Update: I tried killing any process I saw active again, and killing vino-server finally fixed the problem, even though it never went above 5%. I had enabled Remote Desktop a few days ago (and have obviously now disabled it). How did it manage to use 100% CPU while top only showed it at 5% or so? How do I identify the culprit in the future? Looks like I'm not the only one: it's still a problem in both Jaunty and Karmic. Interestingly, neither System Monitor nor htop shows the sum of individual processes coming anywhere near 100% CPU.
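
    A few hedged starting points for next time; this is a sketch, not a definitive recipe (pidstat comes from the sysstat package):

        top -H                                          # show individual threads, not just per-process totals
        ps -eLo pcpu,pid,tid,comm --sort=-pcpu | head   # per-thread CPU usage, highest first
        pidstat 1 5                                     # sample per-process CPU over five one-second intervals

    CPU burned in the kernel on behalf of a process, or spread across many short-lived children, can hide from a plain per-process view, which is one way a 5% entry ends up responsible for a pegged CPU.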

    Read the article

  • Alternatives to Splunk?

    - by MichaelGG
    I'm pretty impressed with Splunk, especially version 4. Pretty graphs, alerting (Enterprise only), and fast, accurate searching. It's a great product. However, the cost is just way too high to consider for full production use at our company. All we really need is to be able to index different logs in a central place, and have reasonable searching on that. Having alerts based on a saved search is also really nice. We don't really go beyond that. In fact, our biggest usage has been in deploying new applications. Everything gets logged via log4net to either the Event Log on Windows or a text file on Linux. Splunk makes it pretty easy to quickly search across those to make sure all the parts of the app are working OK -- that's saved us tons of time versus hunting down individual logging sources. What alternatives exist in this market? I have a sinking feeling Splunk's pricing is so high because they have the best product by far, and they know it. We want the server to run on Windows. I'd be open to a split model, using one product for general logs (collected via syslog/Snare) and a dedicated product for our custom apps (like Log4Net Dashboard). Would using a simple syslog server such as Kiwi, sent to SQL Server (perhaps with full-text enabled), work? I'd hope the cost would be well under 5 figures, USD. (And yes, I know, we're cheap. We're a startup with little money, and BizSpark takes care of all our MS licensing.) Edit: I should add, we have about 10 physical servers, 20 VMs, and a couple of firewalls and switches. 90% is Windows.

    Read the article

  • How to set up Calendar permissions for group to group

    - by Sorean
    I've been scouring the internet and so far have only been able to find examples of how to grant calendar permissions from one user to another using the Add-MailboxFolderPermission command. This is great, and it was okay when they only had a handful of users, but going forward it's not realistic to have to set individual calendar permissions on every calendar for each new user. Layout of the security groups already created (each group has a few people assigned to it):

        Techs
        Managers
        Admin

    What I am trying to accomplish is to set it up so that anyone who belongs to the Managers group can view the calendars of the Techs group, and Admins can view and edit the Techs group. I've found an example of adding just the security group name, but I get an error:

        [PS] C:\Windows\system32>Add-MailboxFolderPermission -Identity Techs:\Calendar -User "Admin" -AccessRights Owner
        The user "Admin" is either not valid SMTP address, or there is no matching information.
            + CategoryInfo          : NotSpecified: (0:Int32) [Add-MailboxFolderPermission], InvalidExternalUserIdException
            + FullyQualifiedErrorId : 39352699,Microsoft.Exchange.Management.StoreTasks.AddMailboxFolderPermission

    Am I creating the groups wrong? Am I using the wrong commands? Any guidance would be greatly appreciated.
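
    A hedged sketch of one way this is commonly handled. The key assumption is that Add-MailboxFolderPermission only accepts mailboxes or mail-enabled security groups, so the plain "Managers"/"Admin" groups would have to be mail-enabled first; group names below are the ones from the question:

        foreach ($tech in (Get-DistributionGroupMember "Techs")) {
            Add-MailboxFolderPermission -Identity "$($tech.Alias):\Calendar" -User "Managers" -AccessRights Reviewer
            Add-MailboxFolderPermission -Identity "$($tech.Alias):\Calendar" -User "Admin"    -AccessRights Editor
        }

    Folder permissions are stamped per mailbox, so new members of Techs would still need the loop re-run (or scheduled) rather than inheriting anything automatically.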

    Read the article

  • Missing Home Folder XP Clients 2008R2 Domain

    - by minamhere
    We just completed a migration from Server 2003 to Server 2008R2. Everything seems to have gone well except that many of our desktops have stopped mapping the Home Folder as set in Active Directory. Other mappings that are defined on individual clients are mapping just fine, these mappings are all on the same file server as the failing Home Folders. Half of the users are on 1 file server and half are on another. Users from both servers are having this problem. I have enabled the Group Policy setting to "Wait for network before logging in". I enabled the policy to "Run Logon Scripts synchronously". There are no errors on the Domain Controller or either File Server. When I enabled Group Policy Preferences as an attempted workaround, I get this error: The user 'V:' preference item in the '<Policy Name>' Group Policy object did not apply because it failed with error code '0x800708ca This network connection does not exist.' This error was suppressed. This seems to indicate that the network connection is not ready by the time Group Policy is processed. But isn't this the point of the "Wait before logging in" and "Run Logon scripts synchronously" settings? Some other background facts: The new Server 2008R2 installation is a Virtual Machine. It is on a new Subnet in a different building from the old server. DNS and DHCP were also migrated from the old DC to this new DC. These Home Folders were all working properly before the migration. Are there new security restrictions/policies in Server 2008R2 that might be causing this? Is there a way to check whether I have an underlying network connectivity issue? Maybe moving the server to the new building is causing a delay/timeout? Any thoughts or ideas on what could be causing this or how I can resolve this? Thanks.

    Read the article

  • Nginx all subdomain points to one subdomain (gitlab) rule

    - by Alkimake
    I have installed gitlab on my server and use nginx as the http server... I simply used the gitlab recipe for nginx:

        # GITLAB
        # Maintainer: @randx
        # App Version: 3.0

        upstream gitlab {
          server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
          listen 192.168.250.81:80;       # e.g., listen 192.168.1.1:80;
          server_name gitlab.xxx.com;     # e.g., server_name source.example.com;
          root /home/gitlab/gitlab/public;

          # individual nginx logs for this gitlab vhost
          access_log /var/log/nginx/gitlab_access.log;
          error_log  /var/log/nginx/gitlab_error.log;

          location / {
            # serve static files from the defined root folder;
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

          # if a file which is not found in the root folder is requested,
          # then proxy the request to the upstream (gitlab unicorn)
          location @gitlab {
            proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_connect_timeout 300;  # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_redirect off;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://gitlab;
          }
        }

    gitlab.xxx.com works fine and I get the gitlab web pages. But another subdomain I use for Jira (jira.xxx.com) on port 80 (I normally run Jira on port 8080) also gets the gitlab web site. How can I restrict this rule to serving only gitlab, or maybe redirect jira.xxx.com to jira.xxx.com:8080?
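
    A hedged sketch of the usual explanation: because gitlab.xxx.com is the only vhost listening on 192.168.250.81:80, it acts as the default server, so every unmatched Host header lands on it. Adding an explicit Jira vhost (and optionally a catch-all) keeps the gitlab block gitlab-only; the proxy target below assumes Jira listens locally on 8080:

        server {
          listen 192.168.250.81:80;
          server_name jira.xxx.com;
          location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
          }
        }

        server {
          listen 192.168.250.81:80 default_server;   # optional: swallow anything else
          server_name _;
          return 444;
        }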

    Read the article

  • Application losing Printer within Terminal Services for remote users

    - by Richard
    Question: What I need is a permanent link to a printer (normally only accessible through Terminal Services printer redirection) so that Sage Line 50 layouts can see that printer persistently, even after users have disconnected and reconnected to the Terminal Services session. Although the printer is accessible each time a user connects to the Sage server via Terminal Services, it is given a different session number and therefore the Sage layout sees it as a different printer. History behind the question: Users connect via Terminal Services to a Sage server on a different site, using Sage Line 50 v15 on that server. Users want to print invoices (Sage layouts) locally. The Sage server cannot see the users' local printers, so to get around this the user relies on the printer redirection feature of Terminal Services. The individual reports can be edited to point to a specific printer by default. This means the user just has to select an invoice and click print, then select the layout/report wanted, and it automatically prints that invoice to the specified default printer. The problem occurs because the layouts are edited to point to the user's local printer "Ricoh 1018d (session#)"; note the "(session#)", as this is the user's local printer being redirected through the Terminal Services session. Users are able to print using the Sage layouts once the default printer is set up within the layout and saved, but as soon as the user disconnects from the Terminal Services session and then reconnects in the morning and goes to print, the layout has lost the connection to that printer. I understand why it fails: the printer is created on a per-session basis, and the layout cannot hold on to a connection from a previous session. Thanks in advance for any assistance...

    Read the article

  • New Computer freezing up at random

    - by Benjamin Frost
    Since I built my system about 4 weeks ago I've been getting random freezes. Sometimes it can happen directly after startup and sometimes it won't freeze up for 3-4 days of 24/7 running. It seems to happen under all stress loads, but mainly when the CPU is under 10% load. It doesn't give me a BSOD or anything; it simply freezes and repeats the last sound before the freeze until I shut it down by the power button. I've re-seated everything in the system except the CPU, and cleaned the RAM sockets and gold fittings. None of the components have been clocked above their factory settings as of yet; I don't want to overclock them until I sort out these freezes. Temps are all well under the rated maximums; the highest temps I have seen are:

        CPU (from HWMonitor by CPUID): low load 16-21°C, full load (100%) 40-43°C
        GPU 1 (from Catalyst Control Center): low load 25-30°C, full load (100%) 45-50°C
        GPU 2 (from Catalyst Control Center): low load 23-27°C, full load (100%) 45-50°C
        General case temps: rear 18-20°C, mid 20-21°C, front (HDD/SSD bays) 14-19°C
        (case temps may be a little off as they come from the Kaze Master Pro fan controller)

    I have un-installed EVERY driver for the motherboard, GPU and sound card and re-installed twice. Windows is all up to date. To date I've tried the following:

        Running Memtest for 24 hrs straight, no errors
        Running Memtest on each individual RAM module, no errors
        Re-seating everything except the CPU
        Cleaning the DIMM sockets and gold inputs
        Testing the graphics cards one at a time
        Re-arranging all the SATA devices to run on chipset-controlled ports
        Re-installing all drivers

        OS: Win7 Professional 64bit
        Motherboard: ASRock X79 Extreme9
        CPU: i7 3930k 3.2GHz
        GPU: Sapphire 7950 OC Edition V2 (2 card Crossfire)
        RAM: G.Skill Ripjaws Z F3-17000CL11Q-16GBZL 16GB (4x4GB) DDR3
        Boot Drive: OCZ Agility 4 128GB
        Data Drive 1: Western Digital Black 2TB
        Data Drive 2: Western Digital Black 2TB
        Data Drive 3: Western Digital Green 3TB
        Power Supply: Corsair AX1200 Gold

    Read the article

  • How to direct reverse proxy requests using wildcard vhosts

    - by HonoredMule
    I'm interested in running a reverse proxy with 2-3 virtual machines behind it. Each internal server will run multiple virtual hosts, and rather than manually configuring each individual vhost on the proxy (a variety of vhosts come and go too often for this to be practical), I would like to use something which can employ pattern matching in a sequential order to find the appropriate back-end server. For example: Server 1: *.dev.mysite.com; Server 2: *.stage.mysite.com; Server 3: *.mysite.com, dev.mysite.com, stage.mysite.com, mysite.com; Server 4: *. In the above configuration, task.dev.mysite.com would go to Server 1, dev.mysite.com would go to Server 3, yoursite.stage.mysite.com to Server 2, www.mysite.com to Server 3, and yoursite.com to Server 4. I've looked into using Squid, Varnish, and nginx so far. I have my opinions regarding their respective desirability and general suitability, but it's not readily apparent whether any of them can handle dynamic server selection in this manner without per-vhost configuration. Apache, on the other hand, can do this handily and simply, but otherwise (aside from being well-known and familiar) seems very poorly suited to what is partly a performance-serving role. Performance isn't actually a major concern yet, but it seems foolish to use Apache if another system will perform far better and can also handle the desired 'hands-free' configuration. But so is frequently having to adjust the gateway for all production services and risk a network-wide outage... and so is setting oneself up for longer downtime later if Apache becomes a too-small bottleneck. Which of these (or other) reverse proxies can do it, or would do it best? And maybe I should post this as a separate question, but if Apache is the only practical option, how safe/reliable/predictable is apache-mpm-event in Apache 2.2 (Ubuntu 12.04.1), particularly for a dedicated reverse proxy? As I understand it, the event MPM was declared "safe" as of 2.4, but it's unclear whether reaching stability in 2.4 has any implications for the older (2.2) versions available in the official/stable package channels of various distros.
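
    For what it's worth, a hedged sketch of how nginx's server_name selection could cover the example above (backend addresses are placeholders): exact names win over wildcards, and a longer wildcard such as *.dev.mysite.com beats *.mysite.com, which matches the ordering described.

        server { listen 80; server_name *.dev.mysite.com;   location / { proxy_pass http://10.0.0.1; } }
        server { listen 80; server_name *.stage.mysite.com; location / { proxy_pass http://10.0.0.2; } }
        server { listen 80; server_name mysite.com dev.mysite.com stage.mysite.com *.mysite.com;
                 location / { proxy_pass http://10.0.0.3; } }
        server { listen 80 default_server; server_name _;
                 location / { proxy_pass http://10.0.0.4; } }

    New vhosts that fit an existing pattern would then need no change on the proxy at all.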

    Read the article

  • Variable directory names over SCP

    - by nedm
    We have a backup routine that previously ran from one disk to another on the same server, but we have recently moved the source data to a remote server and are trying to replicate the job via scp. We need to run the script on the target server, and we've set up key-based scp (no username/password required) between the two servers. Using scp to copy specific files and directories works perfectly:

        scp -r -p -B [email protected]:/mnt/disk1/bsource/filename.txt /mnt/disk2/btarget/

    However, our previous routine iterates through directories on the source disk to determine which files to copy, then runs them individually through gpg encryption. Is there any way to do this using only scp? Again, this script needs to run from the target server, and the user the job runs under only has scp (no ssh) access to the target system. The old job looked something like this:

        #Change to source dir
        cd /mnt/disk1

        #Create variable to store
        # directories named by date YYYYMMDD
        j="20000101/"

        #Iterate through directories in the current dir
        # to get the most recent folder name
        for i in $(ls -d */); do
          if [ "$j" \< "$i" ]; then
            j=${i%/*}
          fi
        done

        #Encrypt individual files from $j to target directory
        cd ./${j%%}/bsource/
        for k in $(ls -p | grep -v /$); do
          sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty -o "/mnt/disk2/btarget/$k.gpg" "/mnt/disk1/$j/bsource/$k"
        done

    Can anyone suggest how to do this via scp from the target system? Thanks in advance.
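
    A hedged sketch of one direction (paths mirror the ones above): scp will let the remote side expand a glob, so if the source server can maintain a "latest" symlink pointing at the newest YYYYMMDD directory (that symlink is an assumption, since scp alone cannot run the date comparison remotely), the target-side job could become:

        scp -r -p -B '[email protected]:/mnt/disk1/latest/bsource/*' /mnt/disk2/bstaging/
        for k in /mnt/disk2/bstaging/*; do
          [ -f "$k" ] || continue
          sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty \
            -o "/mnt/disk2/btarget/$(basename "$k").gpg" "$k"
        done

    Whether the remote glob expansion works depends on how the scp-only restriction is enforced on the source account, so treat this as a starting point rather than a drop-in replacement.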

    Read the article

  • Azure load-balancing strategy

    - by growse
    I'm currently building out a small web deployment using VM instances on MS Azure. The main problem I'm facing at the moment is trying to figure out how to get the load balancing to detect whether a particular VM has failed and stop routing traffic to it. As far as I can tell, there are only two load-balancing options: (1) have multiple VMs (web01, web02, web03, etc.) within the same 'cloud service' behind a single VIP, and configure the endpoints to be load balanced; or (2) create multiple 'cloud services', put a single web VM in each, and create a Traffic Manager service across all of them. It appears that (1) is extremely simplistic and doesn't attempt to do any host failure detection. (2) appears to be much more varied, but requires me to put all my web servers in their own individual cloud services. Traffic Manager appears to be aimed much more at a geographic failover scenario, where you have multiple cloud services across different regions. This approach also has the disadvantage that my web servers won't be able to communicate with my databases on internal IP addresses, unlike scenario (1). What's the best approach here?
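
    One hedged note on option (1): the classic load-balanced endpoints did support custom health probes, which is the usual way to get a dead instance pulled out of rotation. A sketch with the era-appropriate Azure Service Management PowerShell cmdlets, with service, VM and path names as placeholders:

        Get-AzureVM -ServiceName "myservice" -Name "web01" |
            Add-AzureEndpoint -Name "HttpIn" -Protocol tcp -PublicPort 80 -LocalPort 80 `
                -LBSetName "weblb" -ProbePort 80 -ProbeProtocol http -ProbePath "/healthcheck" |
            Update-AzureVM

    Instances that stop answering the probe should stop receiving traffic until they recover, while still sharing the cloud service's internal network with the databases.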

    Read the article

  • How do I deny access to everybody but me in Windows 7?

    - by GregH
    I am trying to set up a file server on my Windows 7 Pro system at home. I set up one common "Share" folder that I have shared/published. Within the share folder I want to have individual folders for me and my wife; that is, only I can read/write my folder and only my wife can read/write her folder, and neither of us can read the contents of the other person's folder. Then I want to have a "public" folder where we can both read/write the contents of the folder as well as any sub-folders created, but my "kids" account can only read from this folder and its sub-folders. It seems really confusing to set up something like this, and it really shouldn't be. I am really confused by the "allow", "deny", and dimmed check boxes in the Security tab. It seems that if I "Deny" access to "Everyone" on my private folder, then I don't even have access to it myself. Windows security seems backwards from the rest of the world's security models: if I am in two groups and I deny access to one group but allow access to the other, Windows denies me access, because I am in one of the groups that has access disallowed. Very confusing.
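
    A hedged sketch of the usual pattern: avoid Deny entries entirely (an explicit Deny on Everyone overrides every Allow, which is why it locks you out too) and instead break inheritance and grant only the accounts that should get in. The paths and the account names Greg, Wife and Kids below are placeholders:

        icacls "D:\Share\Greg"   /inheritance:r /grant:r "Greg:(OI)(CI)F" "Administrators:(OI)(CI)F"
        icacls "D:\Share\Wife"   /inheritance:r /grant:r "Wife:(OI)(CI)F" "Administrators:(OI)(CI)F"
        icacls "D:\Share\Public" /inheritance:r /grant:r "Greg:(OI)(CI)M" "Wife:(OI)(CI)M" "Kids:(OI)(CI)RX" "Administrators:(OI)(CI)F"

    The share permissions can then stay broad (e.g. Authenticated Users with Change), since the NTFS ACLs end up doing the real filtering.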

    Read the article

  • GlusterFS on VMWare ESXi 5

    - by Dharmavir
    I want to build a network file system on top of my VMware ESXi-based virtual nodes, which are running Ubuntu 12.04 LTS. I am evaluating options and found that GlusterFS (http://www.gluster.org/) could turn out to be a good choice. Purpose: I have about 2 dozen VM nodes with different configurations on 2 physical nodes, each of which has 16 Intel Xeon cores, 1 TB of disk and 48 GB of RAM. As I said, each physical server has about 1 TB of disk (and I can add more if I want), so for now I have 2 TB of disk space available; this space is distributed among the roughly two dozen VM nodes I have created. Some of them, being application and management servers, have plenty of free disk space, which I want to use for some heavy storage that I could not provision on a single VM node. If my storage is distributed across dozens of VM nodes and 2 or more physical nodes, I also get some sort of backup. I do not mind if data gets stored redundantly, but to my knowledge an individual VM node may not be able to store all of the data, because the complete data size (say 100 GB) would exceed the VM disk size of 70 GB, and the VM also has system and program files on it. I need some suggestions: is GlusterFS the solution I am looking for, or should I go with something like Hadoop? I am not too sure. But yes, I would like to use the free space on each VM node, and if in doing that the data gets stored redundantly, I am okay with it because it gives me data security.
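
    A hedged sketch of what a basic replicated GlusterFS volume looks like, assuming one brick on a VM on each physical host so a copy survives either physical box (hostnames, paths and the volume name are placeholders):

        # on every participating node
        apt-get install glusterfs-server

        # from one node: form the pool and create a 2-way replicated volume
        gluster peer probe vmnode2
        gluster volume create backupvol replica 2 vmnode1:/export/brick1 vmnode2:/export/brick1
        gluster volume start backupvol

        # on any client VM
        mount -t glusterfs vmnode1:/backupvol /mnt/backup

    Replica 2 halves the usable space but means the volume stays available if one brick (or one physical host) goes away.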

    Read the article

  • Access Denied / Server 2008 / Home Directories

    - by Shaun Murphy
    Domain Controller: BDC01 (192.168.9.2)
    Storage Server: BrightonSAN1 (192.168.9.3)
    Domain: brighton.local

    Last night I moved our users' home directories off of our domain controller onto a storage server using the MS FSMT. I'm getting a mixed bag of errors. The first is that some users cannot log on properly: they can't access logon.vbs in the sysvol folder on the DC and consequently cannot map their drives. I've narrowed that down to a DNS issue, as there was a remnant of our previous DNS server in the DHCP server options and scope options. I'm able to get their drives remapped by browsing to the sysvol folder by IP address as opposed to computer name and manually running the logon.vbs script. The other error I'm getting is Access Denied on a few of the users' home directories. The top-level folder (Home) is shared as normal, and I've removed and re-added the NTFS security a number of times now, including making the user the owner with full control. I've checked each and every individual file and folder in said users' home directories, and they are indeed the owner, but I'm unable to write, though I can read the contents. I'm stumped. This isn't happening to all clients. I'm considering removing their AD accounts, backing up their folders and re-adding them as a last resort, but obviously I'd like to know why the above errors are happening.

    Read the article

  • How does the LeftHand SAN perform in a Production environment?

    - by Keith Sirmons
    Howdy, I previously asked this ServerFault question: Does anyone have experience with LeftHand's VSA SAN? The general consensus is that it does not perform well enough for a production SQL server, even at a light load. So the new question is: how does LeftHand's SAN perform on the HP or Dell dedicated hardware boxes? We are looking at the Starter SAN with 2 HP nodes in 2-way replication, and 2 ESX servers hosting a total of 2 Active Directory servers, 1 MS SQL server, 1 file server, and 1 general-purpose server for things like virus scanning (all Microsoft Server 2005 or 2008). The reason I am looking at LeftHand is the complete software package. I plan to have a DR site and like how the SAN can perform an async replication to the offsite location without having to go back to the vendor for more licenses. I also like the redundancy built into the Network RAID architecture. I have looked at other SANs and found different faults with them. For example, Dell's EqualLogic: I found that although the individual box is very redundant in hardware, the data, once spanned across multiple boxes, is not redundant; if a node goes down you have lost the only copy of the data sitting on that hardware (one thing is certain, all hardware fails... When? is the only question). I have used a XioTech SAN as well. Well worth the money, BTW, but I think it is overkill for the size of the office I am targeting. The cost to get the hardware redundancy in the XioTech makes it a little out of reach for the budget I am working in. Thank you, Keith

    Read the article

  • How can I set Thunderbird's "Recipient" column to display my email address rather than a friendly name?

    - by Howiecamp
    After configuring a single, unified Inbox within Outlook 2007 to unify multiple email accounts, I found Thunderbird 3's Smart Folders feature. It works great, providing individual inboxes for each of your email accounts and a unified inbox which provides a unified, virtual view of those other inboxes. Thunderbird is smart enough that when I reply to an email addressed to a specific email account, the reply is "From" that email address. In order to know which inbound email was sent to which of my accounts, I added the "Recipient" column to the inbox Smart Folder. What's displayed in the Recipient column depends on how the sender/sender's email client addresses the email. If they send it to just "[email protected]" without specifying a friendly name, the Recipient column displays "[email protected]" and there's no ambiguity about which account the email was sent to. However, if the sender has me in their address book (likely stored with a friendly name), it will be addressed as "Howard Camp [[email protected]]" and then show in the Recipient column as "Howard Camp". The problem is that if someone emails me with a friendly name at another of my email accounts (e.g. "Howard Camp [[email protected]]"), the Recipient column will also display "Howard Camp" and I can't tell which account it's addressed to until I open the message and/or look at the details. How can I configure Thunderbird to always display my email address rather than the sender-specified friendly name in the Recipient column?

    Read the article

  • PsExec and Remote Environment Variables, Logging, Etc.

    - by alharaka
    When I run PsExec against a remote computer, I always fall short of what I want. Ideally, in most situations I'd like either (a) a log on an admin server where each individual log has the name of the remote computer it was generated from (e.g. COMPNAME1.log, COMPNAME2.log, etc.), or (b) a log file on each remote computer with whatever name I specify. When I try scenario (a), I use the following command:

        %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username cmd /c echo TEST >> \\server.company.tld\share\%computername%.log

    The problem is that it never works. All the computers just write to the log where %computername% is the computer I execute PsExec from in my office. What I want is a unique log for each computer specified in listofcomputers.txt that correctly uses the hostname from the remote environment variable. Is that even possible? It does not seem to work for me. I tried this, and the syntax is clearly wrong:

        %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username "cmd /c echo TEST >> \\server.company.tld\share\%computername%.log"

    PsExec just fails saying the system file cannot be found (read: syntax fail). As for scenario (b), it appears to be a variation of the same problem. When I run a command like this, it does not work:

        %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username "cmd /c echo %computername% >> \\server.company.tld\share\aggregated.log"

    Is there something I do not understand about remote paths and environment variables with PsExec on the cmd.exe console (I have not even tried the dreaded PowerShell yet)? I know such things work in a batch file (cmd /c \\server.company.tld\share\runthis.bat), but is there a reason it will not work when executing commands as arguments? I always need this, and can never get it!
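
    A hedged sketch of the usual explanation: the local cmd.exe expands %computername% before PsExec ever runs, so the remote machine never sees the variable. From inside a .bat/.cmd file on the admin machine the percent signs can be doubled so the expansion is deferred to the remote side (paths below are the placeholders from the question):

        rem run-remote.bat on the admin machine
        %SystemDrive%\path\to\psexec.exe @listofcomputers.txt -u DOMAIN\username ^
            cmd /c "echo TEST >> \\server.company.tld\share\%%computername%%.log"

    Keeping the redirection inside the quotes matters too, so the remote cmd (not the local one) performs the append. From an interactive prompt the doubling trick does not apply, which is one reason the batch-file route, or dropping the whole command into a remote-side .bat as already noted, tends to be the reliable one.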

    Read the article

  • Body of email breaks distribution list in exchange?

    - by widgisoft
    Hi, I have a very odd problem that I'm not sure is a programming issue or a server issue :-p. Basically I'm sending an email to an Exchange distribution list that includes a PHP stack trace; during certain faults the trace includes really high-level information such as the machine's environment variables (during file reads, etc.). I went through a copy of the email line by line until the email sent, and it appears the line: [SUDO_COMMAND] => /etc/init.d/httpd restart is the culprit. Adding a string replacement before the email is sent allows a successful send. What I don't understand is WHY this stream of characters causes the issue ONLY on the distribution email. If I send the email to myself as well, i.e. "[email protected]; [email protected]", then I get the email fine. Re-ordering the list doesn't make a difference; the group never gets the email. Because the individual gets the email and not the group, I'm assuming the fault is with Exchange and some rogue filtering - I've gone through it with the sysadmins and there's no filtering of any sort on that group... so maybe it's a bug? I can't find anyone else having recorded this specific fault, so I figured I'd open it here. For now I'm just not using the distribution list, but it'd be nice to eventually find the solution. Many thanks, Chris
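
    If it helps, a hedged place to look next: Exchange's message tracking log usually shows whether the message reached the group's expansion step or was dropped by a transport agent first. The sender/recipient values below are placeholders, not the real addresses:

        Get-MessageTrackingLog -Sender "app@example.com" -Recipients "grouplist@example.com" `
            -Start (Get-Date).AddHours(-2) | Format-List Timestamp,EventId,Source,RecipientStatus

    An EXPAND event followed by nothing for the members, versus a FAIL or agent-sourced event, narrows down whether the group expansion or a filter is eating the message.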

    Read the article

  • Can gnome windows (in Ubuntu) be individually themed?

    - by Peter.O
    (Yes, "individually themed" is an oxymoron, but I still want it.) I've just moved from Windows to Ubuntu Linux, and although I am enjoying this new adventure playground, I've encountered something quite unexpected... and its bugging me. It is probably because there are so many gnome apps in Ubuntu (... how strange :) This is great. and the range of available apps is excellent! ...but because of this one-stop-shop scenario, most of the windows look remarkably similar. I'm certainly not after bling-maximus, but I would like a few differentiating colours for my not-so-young eyes to help identify an app at a glance (especially in the compiz lists) ... I love compiz :) I thought it was just a bit of razzle-dazzle, and that alone would be enough, but it is actually very practical to boot. So, basically, I'm wondering if there is a way to theme individual window components on a per app basis... especially the window decoration; menu bar, etc. Thanks.

    Read the article

  • How can I manage AWS VPC ssh access accounts and keys across multiple instances?

    - by deitch
    I am setting up a standard AWS VPC structure: a public subnet, some private subnets, hosts on each, an ELB, etc. Operational network access will be via either an ssh bastion host or an OpenVPN instance. Once on the network (bastion or OpenVPN), admins use ssh to access the individual instances. From what I can tell, all of the docs seem to depend on a single user with sudo rights and a single public ssh key. But is that really best practice? Isn't it much better to have each user access each host under their own name? I could deploy accounts and ssh public keys to each server, but that rapidly gets unmanageable. How do people recommend managing user accounts? I've looked at:

        IAM: it doesn't look like IAM has a method for automatically distributing accounts and ssh keys to VPC instances.
        IAM via LDAP: IAM doesn't have an LDAP API.
        LDAP: set up my own LDAP servers (redundant, of course). A bit of a pain to manage, but still better than managing on every host, especially as we grow.
        Shared ssh key: rely on the VPN/bastion to track user activities. I don't love it, but...

    What do people recommend? NOTE: I moved this over after accidentally posting it on Stack Overflow.
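
    For what it's worth, a hedged sketch of the low-tech route until LDAP or configuration management is justified: push per-admin accounts and keys from the bastion with a small loop. The account name "alice", the key, the "admin" login and the host list are all placeholders:

        for host in $(cat instances.txt); do
          ssh admin@"$host" '
            sudo useradd -m -s /bin/bash -G sudo alice
            sudo install -d -m 700 -o alice -g alice /home/alice/.ssh
            echo "ssh-rsa AAAA...examplekey alice" | sudo tee /home/alice/.ssh/authorized_keys > /dev/null
            sudo chown alice:alice /home/alice/.ssh/authorized_keys
            sudo chmod 600 /home/alice/.ssh/authorized_keys
          '
        done

    It gives per-user accountability in the logs on each instance, and revoking someone is the same loop with userdel, at the cost of keeping the host list current.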

    Read the article

  • Web hosting for multiple web sites providing system isolation

    - by Justin
    We have a small number of projects where we expect the client will not be maintaining the installed versions of the applications we use to power the site (such as Drupal). Given that an important part of security is keeping things updated, we don't want to host these projects on our Plesk-powered dedicated servers that currently host lots of our other clients' websites. Our goal is to find a host where we can deploy isolated instances (be these slices, virtual servers, grid servers, etc.) for each individual web site (or group of 2-3) as we need them. These instances would be completely separate, so that if one web site were hacked it would not impact any other site. Typical hosting requirements:

        Linux, Apache, PHP 5, MySQL
        Supports Drupal
        Ability to set up a cron task (but we don't need SSH access)
        Daily backups
        Virtualized/cloud hosting (we want to avoid shared)
        Pricing per site around $25/month
        OS is patched automatically

    Some options we have considered but that won't work:

        MediaTemple: Two major data-center-wide security incidents and recent downtime foster doubt about this host's technical ability.
        Slicehost: This would require us to manage the entire server, which we don't want to do.
        Rackspace Cloud Sites (formerly Mosso): No backup options.

    Do you have any recommended hosting options given these requirements?

    Read the article

  • Backup strategy for developer-focused Apple environments?

    - by ewwhite
    It's interesting to see the technological split between structured corporate environments and more developer-driven/startup environments. Some of the Microsoft technologies I take for granted (VSS, Folder Redirection, etc.) simply are not available when managing the increasing number of Apple laptops I see in DevOps shops. I'm interested in centralized and automated backup strategies for a group of 30-40 Apple laptops... How is this typically done safely and securely, assuming these are company-owned machines (versus BYOD)? While Apple has Time Machine, it's geared toward individual computer backups and doesn't seem to work reliably in a group setting. Another issue with these workstations is the presence of Vagrant/Virtual Box VMs on the developers' systems. Time Machine and virtual machines typically don't work well unless the VMs are excluded from the backup set. I'd like a push-based backup process with some flexible scheduling options. I know how to handle the backend storage, but I'm not sure on what needs to be presented to the client systems. Due to the nature of the data here, cloud-based backup may not be a viable option. Any suggestions about how you handle this in your environment would be appreciated. Edit: The virtual machine backups are no longer important. They can be excluded from the process and planning.

    Read the article

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID5, starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large. Linux md is a perfect fit for this use case, since it allows both of the changes I want on a live array, and performance isn't at all important. Low cost is also great. Now, I also want to defend against file corruption, so it looked like RAID-Z1 would be a good fit, but evidently I would only be able to add whole additional RAID-Z1 sets at a time rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question: will ZFS be able to correct, or at least detect, corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a setup is welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).
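
    A hedged sketch of what that stack might look like (device and pool names are placeholders): a single-device pool on top of md can detect corruption through ZFS checksums, but it can only repair what it has a second copy of, so copies=2 is the usual workaround, at the cost of roughly half the usable space:

        zpool create tank /dev/md0     # the md RAID5 array presented to ZFS as one device
        zfs set copies=2 tank          # keep two copies of each block so scrubs can self-heal
        zfs set compression=on tank
        zpool scrub tank               # periodic scrubs surface latent corruption

    Without copies=2 (or pool-level redundancy), a checksum failure is reported but the affected file has to be restored from backup.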

    Read the article
