Search Results

Search found 1046 results on 42 pages for 'forth'.

Page 25/42

  • File recovery from Mac results in random files and extensions – how do I get my data back?

    - by Robsta
    This Mac hard drive was dying. Someone I knew did a file recovery and got as many files as he could. The program (not sure how it was done, or what program it was) dished out a bunch of files with names such as: DIR56.TOC DIR55.CUR DIR54.GPZ DIR53.GZI … and so forth, all the way down to DIR0.LZH. Some of the file extensions I do understand — like .JPEG or .MOV — but most of them are ones I've never heard of. I've googled some of them, like .TOC, which stands for "table of contents", but I don't understand how to transfer that data back to the Mac. Currently, they are on a Windows machine. They are being transferred onto an external hard drive that the Mac can read. It can also see all the files. However, the few that I tested to see if the Mac recognizes them (like .TOC and .CUR) cannot be opened. Anyone have any idea as to what I should do? There are some important assignments on there I need to get. EDIT: Data transfer was most likely done by: Easy Recover 6 professional (95% sure, no guarantee)
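
    One thing I plan to try, since the recovery tool apparently guessed at the extensions: I'm told the Unix file utility (available in Terminal on the Mac) identifies a file's real type from its contents rather than its name. A minimal sketch, assuming the recovered files end up in ~/recovered:

        #!/usr/bin/env bash
        # Report the real type of each recovered file from its magic bytes,
        # e.g. "JPEG image data" or "Zip archive data" (path is assumed).
        cd ~/recovered || exit 1
        for f in DIR*; do
            file "$f"
        done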

  • JBossMQ - Clustered Queues/NameNotFoundException: QueueConnectionFactory error

    - by mfarver
    I am trying to get an application working on a JBoss cluster. It uses queues internally, and the developer claims that it should work correctly in a clustered environment. I have JBossMQ set up as an HA singleton on the cluster. The application works correctly on whichever node is currently running the queue, but fails on the other nodes with a "javax.naming.NameNotFoundException: QueueConnectionFactory not bound" error. I can look at JNDIView from the jmx-console and see that indeed the QueueConnectionFactory class only appears on the primary node in the Global context. Is there a way to see the cluster's JNDI listing instead of each server's? The steps I took from a default JBoss 4.2.3.GA installation were to use the "all" configuration, then remove /server/all/deploy/hsqldb-ds.xml and /deploy-hasingleton/jms/hsqldb-jdbc2-service.xml, copying the examples/jms/mysql-jdbc2-service.xml file into their place (editing that file to use DefaultDS instead of MySqlDS). Finally I created a mysql-ds.xml file in the deploy directory pointing "DefaultDS" at an empty database. I created a -services.xml file in the deploy directory with a queue definition like the one below: <server> <mbean code="org.jboss.mq.server.jmx.Queue" name="jboss.mq.destination:service=Queue,name=myfirstqueue"> <depends optional-attribute-name="DestinationManager"> jboss.mq:service=DestinationManager </depends> </mbean> </server> All of the other cluster features are working: the servers list each other in the view, and sessions are replicating back and forth. The JBoss documentation is somewhat light in this area; is there another setting I might have missed? Or is this likely to be a code issue (is there different code needed to do a JNDI lookup in a clustered environment?) Thanks

  • NVidia ION and /dev/mapper/nvidia_... issues.

    - by Ritsaert Hornstra
    I have an NVidia ION board with 4 SATA ports and want to use it to run a Linux server (CentOS 5.4). I first hooked up 3 HDs (that will be a RAID5 array) and a fourth small boot HD. I first started to use the onboard RAID capability, but that does not work correctly under Linux: it is not real hardware RAID but uses the device mapper to define some arrays. After setting the BIOS back to normal SATA mode and wiping the HDs, the first boot hard disk (/dev/sda) is seen as /dev/sda BEFORE mounting, and after mounting as /dev/mapper/nvidia_. CentOS is unable to install on it (and GRUB is not installable on it either). So somehow the hard disk is still seen as if it belongs to some mapped RAID volume. I tried to clean out the HD by issuing a few dd if=/dev/zero of=/dev/sda commands to wipe the starting cylinders and final cylinders, but to no avail. Did anyone see this problem, and did anyone find a solution? UPDATE: When I create only a single ext3 partition on the first HD (/dev/mapper/nvidia_...), no LVM partitions are seen and I can boot from /dev/mapper/nvidia_.... Now the next step is to see how I can get rid of this folly.
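
    In case my dd attempt missed the actual end of the disk, the next thing I plan to try is erasing the fakeraid signature specifically, either with dmraid itself or by zeroing exactly the last sectors. A sketch of both, destructive, and assuming this dmraid build supports the erase flag:

        #!/usr/bin/env bash
        # DESTRUCTIVE: clears RAID metadata on /dev/sda; check the device twice.
        # Option 1: ask dmraid to erase its own on-disk metadata (if supported):
        dmraid -r -E /dev/sda

        # Option 2: zero the tail of the disk, where the nvraid signature lives;
        # plain dd from the start never reaches it without an explicit seek.
        SECTORS=$(blockdev --getsz /dev/sda)   # size in 512-byte sectors
        dd if=/dev/zero of=/dev/sda bs=512 seek=$((SECTORS - 2048)) count=2048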

  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not. Different environments all have slightly different sudoers files. Most of the time 90% of the users are the same and 10% vary, so we cannot have only one sudoers file for everything. Right now, we are using Puppet with 10 different files with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (ex: dbserver.staging1.acme.com) or $hardwaremodel. It works fine, but it's a nightmare to maintain so many files. I'd like to autogenerate the sudoers files based on the server's domain and have only one big file with all the sudoers permissions for all users and all environments. Something that looks like: User_Alias ADMINS = abe, bob, carol, dave case $domain { "staging1.acme.com" { #add dev1,dev2,tester1,tester2 to sudoers file } "testing2.acme.com" { #add tester1, tester3, tester4 to sudoers file } } What's the best way to go about this? Suggestions for alternatives are welcome; I'd appreciate any tips. Update 1: For security reasons, we'd rather not concatenate a bunch of files from a folder located on a puppet client, in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something into it. Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the puppet server to either 3 (prod/stage/test) or preferably 1 file. This file would (somehow) generate sudoers files on the puppet server and send one customized file to each puppet client. The purpose of this is to be able to search for a username in a single file and remove it more quickly than doing it across 11 files. When adding a user to a bunch of environments it won't be as quick, but only one file would need to be opened and looked at, greatly reducing the chances of an omission. Our sudo version is 1.6.9p8, so we can't use a sudoers.d folder, only a single sudoers file.
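
    To make the goal concrete, here is roughly the shape I have in mind, sketched as plain shell outside Puppet; the master-file format, paths, and names are all invented for illustration:

        #!/usr/bin/env bash
        # Sketch: build one host's sudoers from a single tagged master file.
        # Invented master format: "<env1,env2,...|ALL> <sudoers line>", e.g.
        #   ALL                User_Alias ADMINS = abe, bob, carol, dave
        #   staging1,testing2  tester1 ALL=(ALL) ALL
        ENV=${1:?usage: $0 <environment>}     # e.g. staging1
        MASTER=sudoers.master                 # hypothetical master file
        OUT=$(mktemp)
        awk -v env="$ENV" '
            $1 == "ALL" || index("," $1 ",", "," env ",") {
                $1 = ""; sub(/^ /, ""); print
            }' "$MASTER" > "$OUT"
        visudo -c -f "$OUT" && echo "generated sudoers for $ENV OK"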

  • Setting up a PC for the Kids

    - by Martin Clarke
    I recently finished building a new PC from scratch, and then decided to treat myself to a new widescreen monitor. I'm left with a bit of a conundrum over what to do with my old box. I'm considering a few options, such as making it a file server, putting Linux on it, putting it elsewhere in the house, giving it to a member of the family, and so on. But to be honest, I don't really think it would get much use. I've started thinking about putting together something for my kids. The oldest is coming up on 4 in a couple of months and has used my PC and MacBook (supervised!) before for playing jigsaw puzzles, BabySmash and so forth. He also uses the computer at his nursery (kindergarten, for North Americans!). So, it's got me thinking about setting something up for him (bonus for his brother, who is 2). I was wondering what others have done when trying to put together something for their kids? Some points for consideration: operating system? Software? Anti-virus? Internet (probably with blocking)? Hardware (I've seen some keyboards designed with kids in mind)?

  • Making always-on-top windows follow the same MRU order as other windows

    - by nitro2k01
    Note: I'm using Windows 7 with the classic Alt-Tab style, i.e., the registry key AltTabSettings set to 1. I want to use MRU (most recently used) ordering of windows in the Alt-Tab list. However, because the list is ordered by the windows' Z order rather than actual MRU, this sometimes gives a different order after switching from an always-on-top application. Example: I have applications A, B and C open. A is set to always-on-top while the others aren't. A is focused. I now press Alt-Tab and application B is focused. I press Alt-Tab again, but instead of application A receiving focus, application C does. Since A has a higher Z order, it now sits left of application B, despite being the most recently used, and application C is placed right of B and is the first to receive focus from the selection cursor. To switch to application A, I need to press Shift+Alt-Tab or cycle through all the other open windows. This is annoying when flicking focus back and forth between an always-on-top application and one that isn't always-on-top. Is there a way to make the Alt-Tab ordering strictly MRU?

  • Client flips between internal and external IP addresses?

    - by jmiller-miramontes
    I have what seems like a not-particularly-complicated home network, all things considered: a DSL line comes in to a modem/router, which goes off to a switch, which supports a bunch of machines. My machines live in a 192.168.0.x address space; however, I'm running some public servers on the network, so I have a block of 8 (5, really) static IP addresses that are mapped to the servers by the router. The non-servers get 192.168.0.x addresses via NAT; some machines have static addresses and some get addresses from DHCP. Locally, I'm running a DNS server (named) to map between the domain names and the 192.168 address space. Somewhat messy, but everything basically works. Except: one of my local non-server clients occasionally switches from its internal address to its external address. That is, if I check the logs of a website I'm running internally, the hits coming from this client sometimes show up with the internal 192.168 address, and sometimes with the external (216.103...) address. It will flip back and forth for no apparent reason, without my doing anything. This can be a problem for how I have some of the clients' SSH access configured (e.g., allowing access from the internal network but not the external network), but it also Just Seems Wrong. I will confess that I'm kinda skating on the very edge of my networking competence here, but I can't for the life of me figure out what's going on. If it helps, the client in question is running Mac OS X 10.6; its address is statically assigned, is not one of the five externally-accessible addresses, and it gets its DNS from (first) the internal DNS server and (second) my ISP's DNS servers. I can't swear that none of the other NAT clients are also showing this problem; the one I'm dealing with is my everyday machine, so this is where I run into it. Does anybody out there have any advice? This is driving me crazy...
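
    In case it's useful, here is the check I've been running to see which resolver the client is actually using at the moment it flips; server addresses below are placeholders for my real ones:

        #!/usr/bin/env bash
        SITE=www.example.com            # placeholder for my site's name
        INTERNAL_DNS=192.168.0.2        # the local named instance
        ISP_DNS=203.0.113.53            # ISP resolver (placeholder)
        dig +short "$SITE" @"$INTERNAL_DNS"   # expect the 192.168.0.x answer
        dig +short "$SITE" @"$ISP_DNS"        # expect the external answer
        dig +short "$SITE"                    # what this client gets right now
        scutil --dns | grep nameserver        # OS X: resolver order in effect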

  • Keynote presentation in conjunction with other app?

    - by Sören Kuklau
    The short version: I'd like to tell Apple Keynote to switch to a specific app (never leaving full-screen mode) before a certain slide appears, then display that slide as soon as I switch back. Some more details: I'm going to show off five major improvements in an upcoming release of our app. I want one slide highlighting the feature, then one or two showing some details, perhaps with screenshots. After that, so people get a better impression of what I'm talking about, I'll show it off live; to do this, I have to switch to a VM or remote session (since this is a Windows app). Then I'd like to switch back and go to the next feature. I.e., it would be similar to Apple's "Demo" slides in a cursive font, except without a different screen or a different computer. It's this switching back and forth that I want to make smooth, just to wow the audience a little. Can I perhaps make a "special" slide in Keynote that tells it to run an AppleScript? Better yet, to switch to an app, wait until I switch back, and then automatically advance to the next slide?
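
    For context, the fallback I've sketched so far drives the switch from outside Keynote, with osascript bound to a hotkey. The app names are placeholders for whatever actually runs the demo, and I haven't verified that every Keynote version supports the show next command:

        #!/usr/bin/env bash
        # Toggle between Keynote and the demo app (placeholder: VMware Fusion).
        FRONT=$(osascript -e 'tell application "System Events" to get name of first process whose frontmost is true')
        if [ "$FRONT" = "Keynote" ]; then
            osascript -e 'tell application "VMware Fusion" to activate'
        else
            osascript -e 'tell application "Keynote" to activate'
            osascript -e 'tell application "Keynote" to show next'   # advance on return
        fi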

  • OS X random, jumpy dragging behavior

    - by Chris
    This problem has been going on since I got 10.7, and has persisted or even gotten worse since I updated to Mavericks. The best way to describe it is through an example: let's say I'm working in Safari. Everything is working fine. Then I'll switch to another app by clicking on its window; for example, I'll click on Messages to shoot back a reply. When I click back to Safari, keyboard and mouse inputs don't work! The only way I can regain control of the app (Safari) is by clicking on the app's title bar, which then causes the app to jump left or right. If I'm in Messages and I switch back to Safari by clicking said title bar, it works fine. Same with switching back and forth using Command-Tab. I've narrowed the problem down to this: the first app (Safari, in this example) is, for some reason, deciding that I'm in the process of dragging the window around. This could be just an oddly persistent glitch in my system, but has anyone else seen this before? Perhaps a misplaced defaults write... somewhere along the line? Update: A PRAM reset did absolutely nothing.

  • Different versions of iperf for Windows give totally different results

    - by Albert Mata
    Measuring TCP throughput from a Windows client to a Solaris server: WXP SP3 with iperf 1.7.0 returns an average around 90 Mbit; the same client and server with iperf 2.0.5 for Windows returns an average of 8.5 Mbit. Similar discrepancies have been observed connecting to other servers (W2008, W2003). It's difficult to come to any conclusions when different versions of the same tool provide vastly different results. Example below: C:\temp> iperf -v (from iperf.fr) iperf version 2.0.5 (08 Jul 2010) pthreads C:\temp> iperf -c solaris10 Client connecting to solaris10, TCP port 5001 TCP window size: 64.0 KByte (default) [ 3] local 10.172.181.159 port 2124 connected with 10.172.180.209 port 5001 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.2 sec 10.6 MBytes 8.74 Mbits/sec Abysmal performance. But now I test from the same host (Windows XP SP3 32-bit and 100Mbit) to the same server (Solaris 10/sparc 64-bit and 1Gbit running iperf 2.0.5 with a default window of 48k) with the old iperf: C:\temp1> iperf -v iperf version 1.7.0 (13 Mar 2003) win32 threads C:\temp1> iperf.exe -c solaris10 -w64k Client connecting to solaris10, TCP port 5001 TCP window size: 64.0 KByte [1208] local 10.172.181.159 port 2128 connected with 10.172.180.209 port 5001 [ ID] Interval Transfer Bandwidth [1208] 0.0-10.0 sec 112 MBytes 94.0 Mbits/sec So one iperf with a 64k window says 8.74 Mbit and the old iperf with the same window size says 94.0 Mbit. These results are consistent across repeated tests. In my testing, launching the old iperf with window size x and the new iperf with the same window size x produces totally different results instead of the same or very close ones. The only difference I see is the old one compiled as win32 threads vs. pthreads, but parallelism (-P 10) appears to work in both. Does anyone have a clue, or can recommend a tool that gives results I can trust? EDIT: Looking at traces, the old iperf sets the TCP window scale option to 3 in the SYN packet; when I run the new iperf this is set to 0 in the initial packet. A quick analysis of the window size through the exchange shows the old iperf moving back and forth but mostly at 32k, while the new iperf mostly stays at 64k. Maybe it will help somebody to connect the dots.
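
    Two checks I still want to run, to test whether the unscaled window alone explains the 8.74 Mbit figure: measure the RTT (a fixed window caps throughput at window/RTT), and confirm the scale option on the Solaris side. The snoop filter is from memory, so treat this as a sketch:

        # A fixed TCP window caps throughput at window / RTT.  For the
        # 8.74 Mbit figure to be window-limited at 64 KB, the RTT would
        # have to be about 60 ms: 65536 * 8 / 0.060 = ~8.7 Mbit/s.
        ping -n 10 solaris10                  # measure RTT from the XP client

        # On the Solaris side, confirm the scale option in each SYN:
        snoop -v tcp port 5001 | grep -i scale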

  • How do I get netcat to accept connections from outside the LAN?

    - by Chris
    I'm using netcat as a backend to shovel data back and forth for a program I'm making. I tested my program on the local network, and once it worked I thought it would be a matter of simply forwarding a port from my router to have my program work over the internet. Alas! This seems not to be the case. If I start netcat listening on port 6666 with nc -vv -l -p 6666, then go to 127.0.0.1:6666 in a browser, as expected I see an HTTP GET request come through netcat (and my browser sits waiting in vain). If I go to my.external.ip.address:6666, however, nothing comes through at all and the browser displays 'could not connect to my.external.ip.address:6666'. I know that the port is correctly forwarded, as www.canyouseeme.org says port 6666 is open (and when netcat is not listening, that it's closed). If I run netcat with -g my.adslmodem's.local.address to set the gateway address, I get the same behavior. Am I using this command line option correctly? Any insight as to what I'm doing wrong?
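
    For what it's worth, the two cases I still need to separate are outside-in connections versus testing the external address from inside the LAN (I've read that many consumer routers don't do NAT loopback). The checks I have in mind, with placeholder addresses:

        # From a host OUTSIDE the LAN (a friend's machine, a VPS, ...):
        nc -vz my.external.ip.address 6666   # -z: just probe the port

        # From inside the LAN, hit the listener's LAN address directly:
        nc -vz 192.168.1.100 6666            # placeholder LAN address

        # If the outside probe connects but browsing the external address
        # from inside doesn't, the router just lacks NAT loopback and the
        # port forward itself is fine.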

  • How can I get an AdWords ad to show up for a specific term ASAP?

    - by Eric
    I have a very specific situation... I have a client who has a site, backed by a celebrity, selling a common product... so imagine my site is all about "Martha Stewart used cars" (that's not it, but you get the idea). My client wants to see their site show up ASAP in Google search results. While I'm waiting for organic search to kick in and recognize my site, index it properly, etc., I want to buy some AdWords ads for keywords like "Martha Stewart used cars" and "Martha Stewart used car" and so forth, and have the ads show up on the first page of search results. I've done this. The problem is that many, many other advertisers have set up ads on the keyword "used cars", so my Martha-specific ads are never shown. Even when I bid specifically on the keyword phrase "Martha Stewart used cars" and enter that directly into Google, it doesn't show my ad. SO MY QUESTION... how/what can I do to get my ads to show... or really, can I do anything else to get my client's site to show on the results page? (I'm not interested in anything black-hat or illegal; I'm just trying to throw some resources at this situation so that folks looking SPECIFICALLY for "Martha Stewart used cars" will get to the site quickly.) thanks -- Eric

  • Iterating through folders and files in a batch file?

    - by Will Marcouiller
    Here's my situation. A project's objective is to migrate some attachments to another system. These attachments will be located under a parent folder, let's say "Folder 0" (see this question's diagram for a better understanding), and they will be zipped/compressed. I want my batch script to be called like so: BatchScript.bat "c:\temp\usd\Folder 0" I'm using 7za.exe as the command-line extraction tool. What I want my batch script to do is to iterate through "Folder 0"'s subfolders and extract all of the contained ZIP files into their respective folders. It is obligatory that the extracted files are in the same folder as their respective ZIP files. So, files contained in "File 1.zip" are needed in "Folder 1", and so forth. I have read about the FOR...DO command in the Windows XP Professional Product Documentation on using batch files. Here's my script: @ECHO OFF FOR /D %%D IN ("%~1\*") DO FOR %%Z IN ("%%D\*.zip") DO 7za.exe e "%%Z" I guess that I would also need to change the current directory before calling 7za.exe so the files land next to their ZIPs, but I can't figure out how in this batch file (though I know how on the command line, and I know it is the same cd instruction). Anyone's help is gratefully appreciated.

  • HTML tabindex: Put some links last without complete enumeration

    - by Emanuel Berg
    I know I can use the HTML anchor attribute tabindex to set the tab index of links, i.e., the order in which they get focused when the user hits Tab (or Shift-Tab). But I have a home page with tons of links, and enumerating all of those is a lot of work. The actual case is: I have four image links that by default get index 1, 2, 3, and 4 (well, the behavior is equivalent, at least), but I'd much rather have the first non-image link as number 1. Check it out here and you'll understand immediately. I tried to give the first non-image link (the link I desire to have tabindex 1) an explicit tabindex of 1, hoping that the rest would cascade from there, but it didn't (i.e., the first image link got an implicit tabindex of 2). I also tried to give the image links ridiculously high tabindexes, but that didn't work: as the other links didn't have tabindexes at all, those high values were still "first". As a last resort (the solution currently employed) I gave the image links all tabindex -1. That makes for logical tabbing, but it is suboptimal, as those image links are excluded from the tab loop; a user tabbing away will probably never realize that the images are clickable. I'd like them to be reachable by tabbing, but last, after all the ordinary links. If you wonder why I'm so determined to achieve this, it has to do with my own finger habits: I almost exclusively search for links, tab back, tab forth, etc., and very seldom use the mouse. Note: I'll accept a script to change the actual HTML for a complete enumeration, if you convince me there is no "set" way to solve this problem.

  • Exchange 2010 Recovery: Mailbox not found using Restore-Mailbox

    - by user146665
    An Exchange 2010 SP1 Update Rollup 5 information store database was restored to a Recovery Database using EMC Networker successfully. The Recovery Database is in a mounted state with mailboxes listed within it. However, restoring the mailbox content using the following command: Restore-Mailbox –Identity MYMAILBOX –RecoveryDatabase MYRECOVERYDB –RecoveryMailbox LOSTMAILBOX –TargetFolder FOLDERFORLOSTMAILBOX returns the following error: Mailbox "LOSTMAILBOX" doesn't exist on database "MYRECOVERYDB". + CategoryInfo : NotSpecified: (0:Int32) [Restore-Mailbox], ManagementObjectNotFoundException + FullyQualifiedErrorId : 66265C53,Microsoft.Exchange.Management.RecipientTasks.RestoreMailbox Note: I've used the correct alias for the mailbox name; I've also tried combinations such as first name, last name, both, and so forth. Issuing Get-MailboxStatistics -Database MYRECOVERYDB to see if the mailbox is there shows that it is: DisplayName ItemCount StorageLimitStatus LOSTMAILBOX 39495 MailboxDisabled Note: The StorageLimitStatus shows a strange value of MailboxDisabled; perhaps this may be the culprit. Going by the article's documentation, I cannot complete the restore of the mailbox, as I'm stuck at the Restore-Mailbox error that it cannot be found. Please advise, and thank you! Source of article: http://www.testlabs.se/blog/2012/07/05/exchange-2010-restore-to-recovery-database-using-emc-networker/

  • How to get gigabit network speeds on Windows XP?

    - by JB
    We've just installed gigabit switches at work, and things on the Linux side are going well. Our Linux boxes, which use an Intel Corporation 82566DM-2 Gigabit NIC (according to lspci), consistently get over 900 Mbits/sec: iperf -c ipserver ------------------------------------------------------------ Client connecting to ipserver, TCP port 5001 TCP window size: 16.0 KByte (default) ------------------------------------------------------------ [ 3] local 192.168.40.9 port 39823 connected with 192.168.1.115 port 5001 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.0 sec 1.08 GBytes 929 Mbits/sec We have a bunch of Windows XP 64-bit machines that use Broadcom NetXtreme 57xx cards. I spent around a day trying to get equivalent speeds on them, but couldn't get above 200 Mbits/sec. I noticed the Windows iperf tests said that the TCP window size was 8 KB by default (as opposed to 16 KB on Linux), so I modified my test to reflect that. Still no love. I went to Broadcom's site, downloaded the latest drivers for the card and installed them. Still no love. However, finally, I tried a 64 KB window size with the new drivers, and finally an improvement! $ iperf -c ipserver -w64k ------------------------------------------------------------ Client connecting to ipserver, TCP port 5001 TCP window size: 64.0 KByte ------------------------------------------------------------ [ 3] local 192.168.40.214 port 1848 connected with 192.168.1.115 port 5001 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.0 sec 933 MBytes 782 Mbits/sec Much better, but still not really taking advantage of the full capabilities of the network. If the Linux box can reach 950 Mbits/sec consistently, this box should be able to as well. Also, if you're wondering about the medium, this is over the same cable... I'm switching back and forth. Any suggestions or ideas would be really welcome. Thanks!
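
    A follow-up experiment I have queued, to tell a window/latency limit apart from a NIC or driver limit (my understanding is XP has no TCP autotuning, so single streams stay window-bound):

        # On the Linux server, allow a window larger than the default cap:
        iperf -s -w 256k

        # Then from the XP client:
        #   iperf -c ipserver -w 256k       # one stream, bigger window
        #   iperf -c ipserver -w 64k -P 4   # or several parallel streams
        # If -P 4 fills the gigabit link while one stream cannot, the
        # bottleneck is the window, not the Broadcom card or cabling.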

  • Bad sectors, S.M.A.R.T., SpinRite, firmware on platter, and drive ID questions

    - by Christopher Galpin
    Is it possible for S.M.A.R.T. to give false readings (say I was fiddling with lots of recovery programs, transfers, and so on and so forth), or is it strictly a read-only, direct correlation to the physical status of a drive? Does SpinRite level 5 "recover bad sectors" operate on those marked at the factory? Are they on the same level as your generic bad sector, with SpinRite thus having full access? (Also, I'm curious whether SMART's bad sector count is zeroed afterward, or whether it includes factory-marked sectors.) The main firmware of some drives, like a WD Passport's, is stored on the platter. How is it protected? Is it through marking those sectors as bad? If so, I'm wondering if SpinRite's sector recovery could bring about firmware corruption on these drives. Is the failure of a drive to report valid identity information (hdparm -I /dev/xx) consistent with corrupted firmware, or just general disk failure? I may be misunderstanding the role of firmware here. I feel I've read that a drive's identity information is on the platter, just like the partition tables and so on. Is this true? (Apologies if this is more appropriate for Super User.)
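
    For the first question, a concrete way to watch the relevant counters is smartmontools on a Linux box; a sketch (device name assumed), based on my understanding that these counters are maintained by the drive itself:

        # Overall health verdict plus the raw attribute table:
        smartctl -H /dev/sda
        smartctl -A /dev/sda | grep -Ei 'realloc|pending|uncorrect'

        # These counters live in the drive and change only when its own
        # firmware remaps or flags sectors; host-side recovery programs
        # and file transfers don't write to them directly.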

  • Will just a couple of thermal "trip" shutdowns typically damage a CPU?

    - by T.J. Crowder
    The short version: if a CPU gets so hot that the system turns itself off because of a thermal trip signal just a couple of times, is it likely that the CPU will be damaged? Or does the trip do its job, turning things off before the CPU gets damaged? (This is with all default settings in the BIOS; I haven't raised any temperature thresholds or overclocked anything.) The longer version: I just got this Intel Atom D510-based fanless system, installed a 2.5" mobile SATA drive and two 2 GB PC2-6400 modules, closed it up, and having checked everything was recognized in the BIOS, set about installing Ubuntu. After a couple of false starts related, I think, to the external DVD drive I was using, I got the install happily running along. About three-fourths or so of the way through the install, having been running less than an hour, the machine turned itself off. I was actually out of the room at the time, but when I came back and turned it back on, it said it had shut down due to a thermal event. I went into the BIOS and saw that (at that point, having just been turned back on after a couple of minutes off) it was running at 87C. As near as I can tell from Intel's docs (PDF here), the max "junction" temperature for the CPU is 100C and it will raise a THERMTRIP signal at 125C. Yowsa. Presumably there will be some back-and-forth with the vendor on this; I'm just wondering whether letting it get that hot a couple of times is likely to end up damaging it.

  • The bottlenecks of any computer, what to look for?

    - by WebDevHobo
    Whether it is a laptop or a desktop, any computer is made up of several pieces of hardware that communicate with each other, sending data back and forth to ensure that the user gets the desired results. I have seen some theoretical material on computers and hardware, but I wonder how it all comes together: CPU, RAM, graphics card, L1 cache, L2 cache, L3 cache, FSB... and all the other things. Which is the biggest bottleneck? Why would a person not want/need a big value in one of those categories in certain situations? P.S.: when reading the specs of the i5 750 processor, I came across this description: "In place of the FSB, one or more high speed, point-to-point buses called Quick Path Interconnect (QPI) are used, formerly known as Common Serial Interconnect Bus or CSI. QPI features higher bandwidth than the traditional FSB and is better suited to system scaling." What is this, and how does it compare to the FSB? EDIT: I am not planning to buy a computer at all. The goal of this question is to understand the internal relationships of the various hardware pieces, their specific functions, and how they work together. For instance, I have heard that a somewhat higher-than-usual amount of L2/L3 cache can help speed up your computer. What's up with that claim? Also, I forgot to mention hard-disk RPM.
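
    As a concrete example of the kind of comparison I'm after, here is a crude probe (Linux) that puts numbers on the disk path versus the RAM path; a rough sketch, not a proper benchmark:

        # Sequential write to disk, bypassing the page cache:
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
        rm /tmp/ddtest

        # The same transfer against RAM-backed tmpfs, for contrast:
        dd if=/dev/zero of=/dev/shm/ddtest bs=1M count=1024
        rm /dev/shm/ddtest

        # The gap between the two reported rates shows why the disk (and
        # its RPM) usually bottlenecks a system long before cache sizes do.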

  • Malware Defense Shows Up in PlayOn Settings/Logs Although System Has Been Thoroughly Cleaned

    - by nicorellius
    I was hit really hard by some nasty malware: Malware Defense. I was doing something I should not have been doing when I got it (surfing Pirate Bay for TV shows). It locked up my system and I had to reboot in safe mode. I was able to shut down the process and remove it using a malware-killer tool. Then, after my machine was cleaned up a bit, I installed Clamwin, Malwarebytes, and another AV tool. I cleaned the heck out of my system. Simultaneously, while this was going on, I was having trouble with my media server, PlayOn. This tool is great, but has some bugs. One in particular is that it will not function well with AV software running. I found a way to allow the new AV software to run while using PlayOn, but it still says I have Malware Defense on. Firstly, Malware Defense is long gone. I cleaned all remnants from my registry and scoured my system with the above tools multiple times. PlayOn is getting some indication that I have this crap installed on my system, but I don't. The system runs OK, but not optimally, and I have a feeling it is causing my streaming to be interrupted sometimes. How is it that I can't find Malware Defense on my system no matter how I try, yet somehow PlayOn is picking up a fingerprint of it somewhere? I have gone back and forth with MediaMall to no avail. I kind of just gave up, because the streaming works OK. BTW, I also uninstalled/reinstalled PlayOn several times, reverted back to previous versions, etc. The only thing I haven't done is reformat my disk and reinstall Windows. I really don't want to do this if there is another way to remove this little fingerprint. Any ideas?

  • DNS issue for internal website when routing internet connection from a remote location

    - by Michael Paul
    I have an issue that I could use some help with. Our company has a main location and a remote location. Previously, the remote location was connected to the main location through an internet-connection VPN tunnel. The connection was pitifully slow at 1.5 Mbps, so we upgraded it to a 75 Mbps direct link. That meant the remote location lost its internet access, so we routed their access through the main office internet connection. Everything works perfectly except for one thing: the website we host is not accessible from the remote location unless the IP address is used. If I do an NSLOOKUP on our website address from a machine connected to the main location network, it resolves correctly to the inside IP address. However, if I do the same from a remote-location machine, it resolves to the website's outside IP address. Our internal DNS servers have the pointer and CNAME records set up, and everything was working perfectly before the connection was upgraded. In addition, the remote location has a domain controller, DNS server and DHCP server to service these requests at the remote location and prevent them from getting routed back and forth over the link. So I think what is happening is that for some reason the DNS server at the remote location is not resolving our website name correctly and is passing the requests on to the routers, which then push the request out to the internet DNS system. That resolves the name to our external IP. This is purely a DNS issue; everything else works just fine. I am just stumped on this one. Any ideas on how to fix this? Edit: I forgot to mention that at the remote side of the link is a Cisco ASA 5505 and at the main office there is a Cisco ASA 5510. The link is connected between these two devices and the routing is handled in the 5510. Thanks, Michael
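
    To narrow it down, this is the comparison I've been running from a machine at the remote site, querying each DNS server in the path directly; addresses are placeholders for ours:

        SITE=www.example.com          # placeholder for our hosted site
        REMOTE_DC=10.2.0.10           # remote site's DC/DNS (placeholder)
        MAIN_DNS=10.1.0.10            # main office DNS (placeholder)

        nslookup $SITE $REMOTE_DC     # what remote clients actually get
        nslookup $SITE $MAIN_DNS      # should return the inside address

        # If the remote DC hands back the outside IP, it isn't hosting or
        # forwarding the internal zone and is resolving via the internet
        # instead; an internal zone copy or a conditional forwarder to the
        # main office DNS should bring back the inside answer.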

  • What should I be doing while I wait for a progress bar?

    - by Malnizzle
    So I am sitting here waiting for a progress bar to run (20 mins or so), and was wondering how best to use my time as a sysadmin. I briefly debated not posting this question, as it could get flagged as subjective, but I think it's an important question, and a question that can be legitimately answered (per the FAQ). I know this is something a lot of sysadmins deal with, especially if they are client-based, I would venture to guess. There is a lot of material out there about how to multitask, but sysadmin work is unique in this area. I could switch over to another project, but I could get wrapped up in that and forget about the original project I was working on, and that's hard if you are billing a client for your time, both for tracking your time and for being fair to that client. I could check Server Fault, but that isn't directly work-related; I could sort my email; and so on and so forth. What do you do, or what should I do, when I have time waiting for a progress bar? Thanks! (Download done, back to work!)

  • Help me understand Ubuntu user/group permissions.

    - by Bartek
    I'm beginning to deal with more than one user on my system (it's a VPS serving some sites) and I need to make sure I understand how group permissions work. Here's my setup: I have an account named "admin"; it's basically the primary account that is used for serving most of the sites that I control myself. Now, I added a second account named "ville", as one of my users wants to be able to administer that site. So, I can do this the easy way and just chown their domains folder under the ville user and voila, they have permission to do whatever they need and so forth. However, let's say I want to also give the admin user access to the files (modifying and all)... how can I put both users into the same group and give them both permission? I've tried doing: sudo usermod -a -G admin ville to add ville to the admin group, but ville still cannot edit files owned by admin. Permissions on the primary directory for the ville user are read/write for both owner and group, and the current owner and group for the files is admin:admin, but ville still can't write into the directory. So, what should I be doing here to get this right and secure at the same time? Thank you.
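
    For reference, the full sequence I've pieced together so far, assuming the site lives in /var/www/site (a placeholder path); one detail I only learned recently is that group membership changes don't apply until the user's next login:

        # Add ville to the admin group (takes effect on ville's NEXT login):
        sudo usermod -a -G admin ville

        # Group ownership and group-write through the whole tree:
        sudo chown -R admin:admin /var/www/site
        sudo chmod -R g+rwX /var/www/site      # X: execute on dirs only

        # setgid on directories so files created later inherit the group:
        sudo find /var/www/site -type d -exec chmod g+s {} +

        # Verify from ville's account after logging out and back in:
        id                                     # "admin" should be listed
        touch /var/www/site/test && rm /var/www/site/test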

  • Tips and suggestions for IP address re-addressing?

    - by RSXAdmin
    Hello Server Fault universe, My ever-evolving and expanding local area network is currently using a class C address space. My network consists of multiple subnets depending on site/location: 192.168.1.x is the HQ site, 192.168.5.x is a secondary site, 192.168.10.x another, and so on and so forth. Long story short: I have inherited this network design from the previous admin, who has left the company, which started off with a dozen people and now has just over 300 full-time/part-time employees. We do not yet have client VPN access, but we do have site-to-site VPN set up. My question is: in preparation for outside client access to my network via a Cisco ASA, I would like to re-address the HQ site, because I understand that 192.168.1.x or 192.168.0.x are not very good choices for a company subnet; they may conflict with a home user's LAN when connecting to my LAN, I believe? From your experience, does anyone out there have any suggestions or tips on how I can proceed with re-addressing my subnets? If I had designed this network I would have gone with 10.0.0.0 subnets (mask 255.255.255.0), so I am leaning towards changing it to fit. Thank you.

  • GNU Screen and Finch Not Playing Nicely

    - by Sean M
    I use finch for instant messaging, and for persistence, finch is one of the things that runs in my screen session. There are three main computers that I access my screen session from, and each works at a different screen resolution. Because of the different resolutions, when I switch computers I use screen -rd to attach to my screen session; using screen -x results in problems. When I attach to the session, though, finch experiences display problems. I have to wait up to several minutes for finch to become responsive; it doesn't redraw properly at all. Trying to switch between chats just writes ^n and ^p, or ^(1-9) for numbers. It fixes itself after some time. Using Ctrl-L does not help. Switching back and forth between screen windows does not help. This is an annoying behavior that I don't experience with any other applications running in screen. Is this a bug in screen or finch, and if not, what can I change about my configuration to correct it? (I would appreciate it if "finch" could be used as a tag for this, instead of or in addition to "pidgin".)
