Search Results

Search found 7415 results on 297 pages for 'man behind'.


  • Windows 7 remote desktop encryption error every few minutes

    - by rfrankel
    "Because of an error in data encryption, this session will now end." This is the error I've been getting more and more frequently over the past few days, to the point where I can't ignore it: it now happens consistently within 5 minutes of connecting - sometimes within a few seconds. Both the remote and local machines are Windows 7 Pro x64. The remote machine is behind a Linksys RV082, and I'm using UPnP to forward a remote port to the correct local port. This setup had been working fine for several months, and I can't think of any relevant recent changes. Things I've already tried:
    - Disabling unnecessary components of the network connection on the remote machine, until only IPv4 and Client for Microsoft Networks remain.
    - Disabling TCP large send offload on both the remote and local machines.
    - Confirming that the remote machine is not mentioned anywhere in any DMZ settings on the Linksys router.
    - Confirming that there are no X509-related registry keys screwing things up (that is the suggested fix for a slightly different error anyway).
    These are the only fixes I've been able to find after about an hour of searching, and most of them apply to XP or Server 2003 in any case. If anyone could suggest something else, it would be much appreciated.
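
    Since the offload item above keeps coming up in this context, for reference the global TCP offload knobs on Windows 7 can be toggled from an elevated prompt with netsh. These are the global settings only - large send offload itself lives in the NIC driver's advanced properties - so treat this as a hedged extra check rather than the documented fix:

        netsh int tcp show global
        netsh int tcp set global chimney=disabled
        netsh int tcp set global rss=disabled
        netsh int tcp set global autotuninglevel=disabled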

    Read the article

  • port forwarding problem

    - by Claudiu
    I want to set up an svn server on my computer, so it's available from anywhere. I think I set up the repository correctly, using CollabSVN. If I go to Repo-Browser with TortoiseSVN and point it to svn://localhost:3690, it shows the proper repository. The problem now is that I'm behind a router. My local IP is 192.168.1.45, and svn://192.168.1.45:3690 also works. My global IP is, say, x.x.x.x. Just doing svn://x.x.x.x:3690 doesn't work, which makes sense, since I have to set up port forwarding. I'm using a Verizon router. Using its web interface (on 192.168.1.1) I added the following port forwarding rule:
    IP address to forward to: 192.168.1.45
    Source ports: Any
    Dest ports: 3690
    Forward to: 3690
    Protocol: TCP
    However, even after applying this rule, going to svn://x.x.x.x:3690 doesn't work. It takes a few seconds to fail, then says that the connection couldn't be established because the server didn't respond properly after a period of time. What's interesting is that a random port, like svn://x.x.x.x:36904, fails immediately, saying that the target machine actively refused the connection. So I figure the forwarding rule did something, but not everything that was necessary. Any ideas on how to get this working? The router model is MI424-WR and the firmware version is 4.0.16.1.56.0.10.12.3. UPDATE: I also tried setting the destination port to 45000, still forwarding to 3690, in case something was wrong with the lower-numbered ports, but to no avail. I also tried port 80 to port 3690, still all in vain.
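
    One quick check (a sketch; the port and addresses come from the post above, the repository path is illustrative) is to confirm that svnserve is bound to all interfaces, not just localhost, and is actually listening on the forwarded port:

        REM on the machine running svnserve (192.168.1.45)
        netstat -an | findstr :3690
        REM a 0.0.0.0:3690 LISTENING line means svnserve accepts outside connections;
        REM 127.0.0.1:3690 would explain why only localhost works

        REM if needed, restart svnserve listening on all interfaces (path is illustrative)
        svnserve -d -r C:\Repositories --listen-port 3690

    Also note that testing svn://x.x.x.x:3690 from inside the same LAN only works if the router supports NAT loopback, so a test from an outside connection (e.g. a phone on 3G) is a more reliable check of the forwarding rule itself.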

    Read the article

  • how to download vim script in command line

    - by HaiYuan Zhang
    Whenever I want to install a new vim script on the Linux server I'm working on, my typical workflow is the following:
    - browse the plugin's homepage on vim online using Firefox
    - download the right version of the plugin to my laptop by clicking some highlighted link
    - upload the downloaded plugin from my laptop to the Linux server using WinSCP
    which is really inconvenient. I don't know what the magic behind this is: clicking the same hyperlink in a web browser downloads the file, but feeding that hyperlink to wget on the Linux command line ends up with nothing but an error. Otherwise I could just grab the link in the browser and then use wget or some similar tool to actually do the downloading. I try new vim scripts quite often, so you can imagine my dismay at having to repeat this tedious procedure every time. So if anyone knows some tips that would let me download vim scripts in a more "professional" way, I'd appreciate it a lot. Post edit: My problem is not finding a tool like wget or curl. The problem is using these tools to download a vim script specifically. Let's take http://www.vim.org/scripts/script.php?script_id=30 as an example; it's the normal place where one can get the script, at least for me, but I can't find a working URL on this page that I can feed to wget.
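
    The links on that page which actually serve the file are of the form download_script.php?src_id=NNNN (the src_id changes with every release of a script), and wget handles them fine as long as you give it an output name, since the URL itself doesn't end in a file name. A sketch, with a made-up src_id and output name:

        # list the download links the script page points at
        curl -s 'http://www.vim.org/scripts/script.php?script_id=30' | grep -o 'download_script.php?src_id=[0-9]*'
        # fetch one of them, naming the output file explicitly
        wget -O plugin.zip 'http://www.vim.org/scripts/download_script.php?src_id=12345'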

    Read the article

  • How to diagnose remote assistance problem

    - by cantabilesoftware
    I have a long-standing issue with Remote Assistance between a home and a work PC. My wife and I both use MSN Messenger, and I used to be able to control her PC at home via MSN Remote Assistance. Some time ago, however, this stopped working and I don't know why. We're both running the latest versions of MSN Live Messenger and I've checked that the appropriate firewall ports are open, but it still doesn't work and MSN just says something useless like "The person isn't responding". Any suggestions for how I can diagnose this? More info: I just tried direct Remote Desktop between the work PC and the home PC and it works fine - so I presume all the appropriate ports are open. Just Remote Assistance doesn't work. I'd like to get RA working so I can demonstrate how to do things remotely: with Remote Desktop the person at the other end gets booted off and can't see, while with Remote Assistance they can follow along step by step. Some comments below suggest using other solutions, which is fine and they do work, but there must be a way to diagnose RA and get it working. Experimenting with this some more, the notebook that I was using at work today, which refused to connect, works fine for Remote Assistance when I bring it home. So I guess this must be a problem with our network configuration at work. I've checked that 3389 is open in the firewall on the office router, and Remote Desktop works both ways... just not Remote Assistance. I've read that Remote Assistance won't work if the client and server are both behind non-UPnP NAT routers; if one has UPnP it's supposed to work. The office router doesn't have UPnP enabled but my home one does. I've also scoured the event logs on both ends - nothing noteworthy (unless I'm looking in the wrong spot). Note (copied from comment): I've just tried ShowMyPC, which is based on VNC, and it works, but I'd still like to figure out what's wrong with RA - it's just bugging me. The question is only about Remote Assistance, no need to propose solutions based on other programs. [/edit by Gnoupi]

    Read the article

  • Change Windows 7 Explorer's Details Pane limits

    - by Paul
    For some reason, MS decided to completely kill the status bar's functionality in Win7 (and maybe Vista, but I don't know for sure). I have tried all possible options such as Classic Shell and so on. Basically, the one thing I miss most is seeing at a glance the total size of my selected files. I know I can press Alt+Enter or whatever, but that's not the point. The point is that the so-called "details" pane stops providing details if more than 15 files are selected! WTH? I cannot understand the reason behind such a stupid, arbitrary limit that doesn't seem to be user-configurable at all. Anyway, what I'm looking for is a way to change that limit, either via the registry or otherwise. If changing the limit isn't possible, would it be possible for some programming genius to create a very small, optimized, lightweight, non-resource-hungry (you get the idea!) program whose sole purpose is to automatically click the "Show more details..." link in the details pane after every file selection? For bonus points, the program should wait a second or two after the file selection is complete before doing this, so that selecting multiple files in quick succession doesn't hit the HDD repeatedly. Any takers for the challenge? :)

    Read the article

  • NTLM, Kerberos and F5 switch issues

    - by G33kKahuna
    I'm supporting an IIS-based application that is scaled out into web and application servers; both the web and application tiers run behind IIS. The application is NTLM-capable, and IIS is configured to authenticate via Kerberos. It's been working so far without a glitch. Now I'm trying to bring in 2 F5 switches, one in front of the web servers and another in front of the application servers. The 2 F5 instances (say IPs 185 and 186) sit on a Linux host, and F5-to-F5 traffic goes to a NAT IP (say IPs 194, 195 and 196). I created DNS entries for all the IPs, including the NAT addresses, and ran SETSPN to register the IIS service account to be trusted at the HTTP, HOST and domain level. With the web F5 turned on, and with each web server connecting to its designated app server, users connecting to the web F5 domain name authenticate without a problem. However, when the app load balancer is turned on and the web servers are pointed to the new F5 app domain name, users get a 401. The IIS log shows no authenticated username and a 401 status. Wireshark does show a Negotiate ticket header being passed in. Any ideas or suggestions are much appreciated. Please advise.
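
    For reference, the kind of SPN registration described above - plus the checks for duplicate SPNs, which commonly break Kerberos behind a load-balanced name and show up as 401s - looks roughly like this (the host names and account are placeholders):

        REM register the load-balanced app name against the IIS service account (names are placeholders)
        setspn -A HTTP/app-f5.example.com DOMAIN\iisSvcAccount
        setspn -A HTTP/app-f5 DOMAIN\iisSvcAccount
        REM list what the account already has, and search for duplicates
        REM (-X needs the Server 2008-era version of setspn)
        setspn -L DOMAIN\iisSvcAccount
        setspn -X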

    Read the article

  • how to go about scaling a web-application ?

    - by phoenix24
    This is coming from someone who's been primarily a web-application developer and knows little about scaling/scalability techniques. I'll start by stating that my application is written in Python, using Django; a fairly standard setup. I currently use Apache 2.2 as my web server and MySQL as my database server, both running on the same VPS. Up until now this was basically a prototype, with merely 15-30 concurrent users at any given time, so I had no issues; but since we'll be adding more users we'll have severe performance issues. So my question is: how do I go about scaling my web application? My plan is as follows:
    - Right now I have just one VPS running Apache + MySQL.
    - Next, I plan to add another VPS to run only MySQL, so I'll have one web server and one DB server.
    - Next, I'll add memcached to the web server for caching data and taking some load off MySQL.
    - Next, another web server for serving all the static content.
    - Next, a VPS for load balancing (nginx/varnish), behind which would be my two web servers and then the DB server.
    Does that sound like a workable strategy? Please guide me here.

    Read the article

  • Recommendation on remote access setup for accessing customer systems

    - by gregmac
    I'm looking for a product recommendation (open or commercial) that will allow remote access to customer sites for tech support purposes. We need to be able to gain access to help troubleshoot problems on servers. Currently we end up using anything from RDP on a public IP, to various VPNs that clients happen to have, to WebEx-type sessions that require lots of interaction from both sides to get things working. This often means a problem that could take 10 minutes to solve takes an extra 30+ minutes of messing around trying to get a connection up. There are multiple customer sites, which should NOT have access to each other. At each site there is anywhere from 1 to 8 servers (Windows 2003 or 2008) that need to be accessed. Requirements:
    - Support connections to machines even if they're behind a firewall/router with no public IP.
    - Be able to selectively allow/deny access from the customer site.
    - The customer site should not be able to connect outbound to anywhere else (our systems, or other customer sites).
    - Support multiple users from our end.
    - If not a VPN connection (where RDP could be used over top), it should support remote desktop access (including copy/paste) and file transfers.
    - Preferably some way to list all remote systems, showing online/offline.
    Anyone have any suggestions?

    Read the article

  • Road Warrior VPN Setup

    - by wobblycogs
    I apologise up front for the rather open-ended nature of this question, but I've got well out of my depth and could really do with some pointers. I need to set up a road-warrior VPN solution which will allow our customers to securely access a number of services we provide for them. Customer machines will be running a variety of Windows versions from XP onwards, with a variety of patch levels. Typically they will connect from the client's main offices, but not always. It is safe to assume that all clients will be behind NAT, but we may occasionally see a connection that isn't NAT'ed. The typical connection situation is therefore:
    Customer Laptop -- Router (NAT) -- Internet -- VPN Server + Firewall -- Server (Win 2008 R2, non-routable IP)
    There will initially be a dozen or so people that could connect, but that will grow quickly to around 100. It's unlikely that we'll see that many concurrent connections though; I imagine our total VPN throughput would be <50 Mbps peak. What are my options for setting this up? I've been trying to set up a system like this using a MikroTik router for a few days but have struggled to get it working correctly, particularly with NAT'ed clients. I've had a quick look at OpenVPN and liked what I saw, but I think it's unlikely our customers' IT departments would allow the client to be installed. Finally, I've looked at the Cisco ASA range, but I'm on a fairly tight budget, so that is less preferable - although it looks like it would work pretty much out of the box. My fallback position is to connect the server directly and use the provided VPN + firewall facilities, but that is far from ideal as the number of servers is likely to grow over time.

    Read the article

  • How to take screenshots of WPF applications in correct size and content

    - by Thomas W.
    I usually take screenshots of single windows via the built-in key combination Alt+Print. Unfortunately this does not work well for more and more applications - all of them WPF applications. Usually the screenshots have at least one of the following properties:
    - the screenshot is larger than expected and contains parts of the screen around the actual window
    - the screenshot has the correct size but includes parts of other windows, e.g. the Windows taskbar. Of course the taskbar might be in front of the window, but taking screenshots of "normal" programs works fine.
    How do I take screenshots of WPF applications which are correct in size and content? I'd like to avoid the extra effort of checking all the screenshots for correctness, reproducing the situation and taking them again in case of issues, or repairing/faking them manually in a pixel-manipulation program (e.g. Paint.NET). I observe this on Windows 7 x64 SP1 with all official updates installed, but it might apply to other Windows versions as well (not tested yet). .NET 4.5 is installed; the application itself might only need the built-in .NET 3.5.1. It's reproducible on a virtual machine with the same settings. Examples: a screenshot of an application running maximized includes parts of the taskbar; a screenshot of a progress dialog which is behind the taskbar also includes the taskbar, while it doesn't for non-WPF applications.

    Read the article

  • Can two mocha for After Effects X-Spline Layers be merged ?

    - by George Profenza
    Hello, I'm new to mocha for After Effects, but I like the auto-tracking feature. There are a few markers I'm adding, but one of them is giving me a bit of a headache. I'm tracking a circle that moves around a larger object, so it moves in front of it initially, then behind (occluded by the larger object), then in front again. I tried to track the circle in the first part (when it's in front, before being occluded) and in the second part (when it's in front again, coming out on the other side of the large object) using a single X-Spline. The problem is that the second part starts in a different location, and I don't know how to move the X-Spline to the new position and track from there without affecting the previous keyframes. As a workaround I use two X-Spline layers, export the data to .txt files, then manually merge the two files into a new one containing keyframes from both layers. Is there an easier way to do this (either merging two X-Spline layers, or using a single X-Spline layer that can move to a new location without affecting previous keyframes)? Any suggestion would help.

    Read the article

  • AWS elastic load balancer basic issues

    - by Jones
    I have an array of EC2 t1.micro instances behind a load balancer, and each node can manage ~100 concurrent users before it starts to get wonky. I would THINK that if I have 2 such instances it would allow my network to manage 200 concurrent users... apparently not. When I really slam the server (blitz.io) with a full 275 concurrent users, it behaves the same as if there were just one node: it goes from a 400 ms response time to 1.6 seconds (which for a single t1.micro is expected, but not for 6). So the question is, am I simply not doing something right, or is ELB effectively worthless? Anyone have some wisdom on this? AB logs:
    Load balancer (3x m1.medium)
      Document Path:          /ping/index.html
      Document Length:        185 bytes
      Concurrency Level:      100
      Time taken for tests:   11.668 seconds
      Complete requests:      50000
      Failed requests:        0
      Write errors:           0
      Non-2xx responses:      50001
      Total transferred:      19850397 bytes
      HTML transferred:       9250185 bytes
      Requests per second:    4285.10 [#/sec] (mean)
      Time per request:       23.337 [ms] (mean)
      Time per request:       0.233 [ms] (mean, across all concurrent requests)
      Transfer rate:          1661.35 [Kbytes/sec] received
      Connection Times (ms)
                    min  mean[+/-sd] median   max
      Connect:        1    2    4.3      2     63
      Processing:     2   21   15.1     19    302
      Waiting:        2   21   15.0     19    261
      Total:          3   23   15.7     21    304
    Single instance (1x m1.medium, direct connection)
      Document Path:          /ping/index.html
      Document Length:        185 bytes
      Concurrency Level:      100
      Time taken for tests:   9.597 seconds
      Complete requests:      50000
      Failed requests:        0
      Write errors:           0
      Non-2xx responses:      50001
      Total transferred:      19850397 bytes
      HTML transferred:       9250185 bytes
      Requests per second:    5210.19 [#/sec] (mean)
      Time per request:       19.193 [ms] (mean)
      Time per request:       0.192 [ms] (mean, across all concurrent requests)
      Transfer rate:          2020.01 [Kbytes/sec] received
      Connection Times (ms)
                    min  mean[+/-sd] median   max
      Connect:        1    9  128.9      3   3010
      Processing:     1   10    8.7      9    141
      Waiting:        1    9    8.7      8    140
      Total:          2   19  129.0     12   3020
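
    For reference, output in this shape comes from an ApacheBench run along these lines (the hostname is a placeholder; the request count and concurrency are taken from the log above):

        ab -n 50000 -c 100 http://my-elb-1234567890.us-east-1.elb.amazonaws.com/ping/index.html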

    Read the article

  • Linux: prevent outgoing TCP flood

    - by Willem
    I run several hundred web servers behind load balancers, hosting many different sites with a plethora of applications (over which I have no control). About once a month, one of the sites gets hacked and a flood script is uploaded to attack some bank or political institution. In the past these were always UDP floods, which were effectively resolved by blocking outgoing UDP traffic on the individual web server. Yesterday they started flooding a large US bank from our servers using many TCP connections to port 80. As these types of connections are perfectly valid for our applications, just blocking them is not an acceptable solution. I am considering the following alternatives. Which one would you recommend? Have you implemented these, and how?
    - Limit (with iptables) outgoing TCP packets with source port != 80 on the web server (sketched below).
    - The same, but with queueing (tc).
    - Rate-limit outgoing traffic per user per server. Quite an administrative burden, as there are potentially thousands of different users per application server. Maybe this: how can I limit per-user bandwidth?
    - Anything else?
    Naturally, I'm also looking into ways to minimize the chance of hackers getting into one of our hosted sites, but as that mechanism will never be 100% watertight, I want to severely limit the impact of an intrusion. Cheers!
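
    A rough sketch of the first alternative, assuming the flood consists of connections the server itself opens (so their source port is ephemeral, not 80); the limit values are arbitrary and would need tuning:

        # let normal HTTP responses (source port 80) out untouched
        iptables -A OUTPUT -p tcp --sport 80 -j ACCEPT
        # allow a modest rate of new outbound connections for legitimate use (APIs, callbacks, ...)
        iptables -A OUTPUT -p tcp --syn -m limit --limit 10/second --limit-burst 30 -j ACCEPT
        # log and drop the excess so a flood script can't saturate the uplink
        iptables -A OUTPUT -p tcp --syn -m limit --limit 1/minute -j LOG --log-prefix "outbound flood: "
        iptables -A OUTPUT -p tcp --syn -j DROP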

    Read the article

  • Tunneling a public IP to a remote machine

    - by Jim Paris
    I have a Linux server A with a block of 5 public IP addresses, 8.8.8.122/29. Currently, 8.8.8.122 is assigned to eth0, and 8.8.8.123 is assigned to eth0:1. I have another Linux machine B in a remote location, behind NAT. I would like to set up a tunnel between the two so that B can use the IP address 8.8.8.123 as its primary IP address. OpenVPN is probably the answer, but I can't quite figure out how to set things up (topology subnet or topology p2p might be appropriate, or should I be using Ethernet bridging?). Security and encryption are not a big concern at this point, so GRE would be fine too - machine B will be coming from a known IP address and can be authenticated based on that. How can I do this? Can anyone suggest an OpenVPN config, or some other approach, that would work in this situation? Ideally it would also be able to handle multiple clients (e.g. share all four spare IPs with other machines), without letting those clients use IPs to which they are not entitled.
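
    Since GRE is mentioned as acceptable, a minimal GRE sketch looks roughly like the following (B's public address and LAN address are placeholders; the NAT in front of B also has to pass GRE, IP protocol 47, which not every router does - one reason OpenVPN over UDP is the more common answer here):

        # --- on server A (8.8.8.122 on eth0) ---
        ip addr del 8.8.8.123/29 dev eth0                  # stop claiming .123 locally (it was on eth0:1)
        ip tunnel add gre1 mode gre local 8.8.8.122 remote <B_public_address> ttl 255
        ip link set gre1 up
        ip route add 8.8.8.123/32 dev gre1                 # send .123 traffic into the tunnel
        echo 1 > /proc/sys/net/ipv4/ip_forward
        echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp    # answer ARP for .123 toward the upstream gateway

        # --- on machine B (LAN address assumed to be 192.168.0.2) ---
        ip tunnel add gre1 mode gre local 192.168.0.2 remote 8.8.8.122 ttl 255
        ip link set gre1 up
        ip addr add 8.8.8.123/32 dev gre1
        # then route whatever traffic should use the public address over gre1,
        # keeping the tunnel endpoint (8.8.8.122) reachable via the normal NAT'ed path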

    Read the article

  • Help about pure-ftp

    - by hai
    I set up Pure-FTPd on FreeBSD behind a firewall. Pure-FTPd is configured for passive-mode FTP (port range 50400-50600) and the firewall has ports 50400-50600 open (both IN and OUT). But when I try to connect with an FTP client, the connection does not complete. The error transcript is:
    Status:   Connecting to 210.245.89.95:21...
    Status:   Connection established, waiting for welcome message...
    Response: 220---------- Welcome to Pure-FTPd [privsep] ----------
    Response: 220-You are user number 1 of 50 allowed.
    Response: 220-Local time is now 13:20. Server port: 21.
    Response: 220-IPv6 connections are also welcome on this server.
    Response: 220 You will be disconnected after 15 minutes of inactivity.
    Command:  USER bk
    Response: 331 User bk OK. Password required
    Command:  PASS
    Response: 230 OK. Current directory is /
    Command:  SYST
    Response: 215 UNIX Type: L8
    Command:  FEAT
    Response: 211-Extensions supported:
    Response:  EPRT
    Response:  IDLE
    Response:  MDTM
    Response:  SIZE
    Response:  REST STREAM
    Response:  MLST type;size*;sizd*;modify*;UNIX.mode*;UNIX.uid*;UNIX.gid*;unique*;
    Response:  MLSD
    Response:  ESTA
    Response:  PASV
    Response:  EPSV
    Response:  SPSV
    Response:  ESTP
    Response: 211 End.
    Status:   Connected
    Status:   Retrieving directory listing...
    Command:  PWD
    Response: 257 "/" is your current location
    Command:  TYPE I
    Response: 200 TYPE is now 8-bit binary
    Command:  PASV
    Response: 227 Entering Passive Mode (210,245,88,98,138,1)
    Command:  MLSD
    Error:    Connection timed out
    Error:    Failed to retrieve directory listing
    Status:   Connecting to 210.245.88.98:21...
    Status:   Connection established, waiting for welcome message...
    Please help.
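
    Reading the transcript: the PASV reply "(210,245,88,98,138,1)" tells the client to open its data connection to 210.245.88.98 on port 138*256+1 = 35329 - a different address than the one it connected to, and a port outside the 50400-50600 range the firewall allows - which is why the directory listing times out. With Pure-FTPd the passive port range and the announced address are set with the -p and -P startup flags; a sketch (adapt the path and flags to however the FreeBSD rc script starts it):

        # confine passive data connections to the opened range and force the public address in PASV replies
        /usr/local/sbin/pure-ftpd -p 50400:50600 -P 210.245.89.95 -B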

    Read the article

  • Spring-mvc project can't select from a particular mysql table

    - by Dan Ray
    I'm building a Spring MVC project (using JPA and Hibernate for DB access) that runs just fine locally, on my dev box, with a local MySQL database. Now I'm trying to put a snapshot up on a staging server for my client to play with, and I'm having trouble. Tomcat (after some wrestling) deploys my war file without complaint, and I can get some response from the application over the browser. When I hit my main page, which is behind Spring Security authentication, it redirects me to the login page, which works perfectly. I have Security configured to query the database for user details, and that works fine. In fact, a change to a password in the database is reflected in the behavior of the login form, so I'm confident it IS reaching the database and querying the user table. Once authenticated, we go to the first "real" page of the app, and I get a "data access failure" error. The server's console log gets this line (redacted):
    ERROR org.hibernate.util.JDBCExceptionReporter - SELECT command denied to user 'myDbUser'@'localhost' for table 'asset'
    However, if I go to MySQL from the shell using exactly the same credentials, I have no problem at all selecting from the asset table:
    [development@tomcat01stg]$ mysql -u myDbUser -pmyDbPwd dbName
    ...
    mysql> \s
    --------------
    mysql  Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (i686) using readline 5.1
    Connection id:    199
    Current database: dbName
    Current user:     myDbUser@localhost
    ...
    UNIX socket:      /var/lib/mysql/mysql.sock
    --------------
    mysql> select count(*) from asset;
    +----------+
    | count(*) |
    +----------+
    |       19 |
    +----------+
    1 row in set (0.00 sec)
    I've broken down my MySQL access settings, cleaned out the user and re-run the grant commands, and set up a version of the user from 'localhost' and another from '%', making sure to flush privileges... Nothing is changing the behavior of this thing. What gives?
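
    One thing worth checking is exactly which grants each user entry ends up with, since the shell session above comes in over the UNIX socket while the JDBC connection comes in over TCP and may match a different user@host row. A hedged sketch (each SHOW GRANTS only works for an entry that actually exists, and the GRANT line is just an example of table-level privileges, not a statement of what the app needs):

        # compare the user entries and their effective grants
        mysql -u root -p -e "SELECT user, host FROM mysql.user WHERE user='myDbUser';"
        mysql -u root -p -e "SHOW GRANTS FOR 'myDbUser'@'localhost';"
        mysql -u root -p -e "SHOW GRANTS FOR 'myDbUser'@'%';"

        # if privileges on the tables are missing for the entry the app matches:
        mysql -u root -p -e "GRANT SELECT, INSERT, UPDATE, DELETE ON dbName.* TO 'myDbUser'@'localhost'; FLUSH PRIVILEGES;"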

    Read the article

  • HP Proliant DL380 G4 - Can this server still perform in 2011?

    - by BSchriver
    Can the HP ProLiant DL380 G4 series server still perform well in the 2011 IT world? This may sound like a weird question, but we are a very small company whose primary business is NOT IT related, so my IT dollars have to stretch a long way. I am in need of a good web and database server. The load and demand will be fairly low for a while, so I am not looking for - nor do I have the money for - a brand new HP DL380 G7 series box at $6K. While searching around today I found a company in ATL that buys servers off business leases and then strips them down to parts. They clean, check and test each part and then custom "rebuild" the server based on whatever specs you request. The interesting thing is that they also provide a 3-year warranty on all the servers they sell. I am contemplating buying two of the following:
    HP ProLiant DL380 G4
    - Dual (2) Intel Xeon 3.6 GHz 800 MHz 1 MB cache processors
    - 8 GB PC3200R ECC memory
    - 6 x 73 GB U320 15K rpm SCSI drives
    - Smart Array 6i card
    - Dual power supplies
    - Plus the usual CD-ROM, dual NIC, etc.
    All this for $750 each, or $1500 for two pretty nicely equipped servers. The price then jumps on the next model up, the G5 series - from $750 to around $2000 for a comparable server. I just do not have $4000 to buy two servers right now. So back to my original question: if I load Windows 2008 R2 Server and IIS 7 on one of the machines, and Windows 2008 R2 Server and MS SQL Server 2008 R2 on the other, what kind of performance might I expect to see from these machines? The fact is this series is now three versions behind the G7s, and it was built when Windows 2000 Server was the dominant OS and Windows 2003 Server was just coming out. If you are running Windows 2008 R2 Server on a G4 with similar or lesser specs, I would love to hear what your performance is like.

    Read the article

  • Varnish, Nginx, Apache, APC, Meteor, Cpanel & Wordpress On A Single Server, Any Good?

    - by Aahan
    Yes, I have read many close questions, but I need a specific answer, hence this question. First, these are my new server specifications: Linux server (CentOS), Intel Xeon 3470 quad-core (2.93 GHz x 4) processor, 4 GB DDR3 memory, 1 TB hard disk space, 10 TB bandwidth and 9 dedicated IPs.
    AIM: to speed up my WordPress blog and increase the server's capacity to handle heavy load.
    PLAN: this is how I am planning to set up my server:
    - VARNISH (in front, to cache server responses)
    - NGINX (to effectively handle static content and overcome the C10k problem)
    - APACHE (behind Nginx, to effectively deliver dynamic content)
    - APC (PHP page, database and object caching)
    - CPANEL (which requires Apache, and I require it)
    - WORDPRESS + W3 TOTAL CACHE (caching plugin for WordPress)
    So, will this setup work? Has anyone tried it? Please share your thoughts and knowledge. NOTE: I can't do without Apache because I am used to the .htaccess and cPanel stuff, so that's not optional; everything else is. Please try to help. I hope I am clear in what I wanted to ask.

    Read the article

  • Mounting any Windows share from OS X 10.7.5 suddenly stopped working

    - by user2169619
    I have a problem mounting Windows network shares from OS X 10.7.5 - it used to work, but it stopped, nothing helps, and there is nothing in the logs. Here is what I get when trying to mount the share manually:
    mount -t smbfs //10.0.0.7/d /tmp/test
    When sniffing with tshark, no packet is sent at all, and the command returns immediately with:
    mount_smbfs: server connection failed: Unknown error: -1
    There is nothing in /var/log/syslog.log or /var/log/kern.log. The Finder does not work either - it throws an error saying something is wrong (the message is in Czech, so I'm not including it here). I just cannot connect to any network share. In a virtual Windows 7 machine in Parallels Desktop I can connect to the share successfully, but only when the VM uses its own IP address, not when it sits behind the OS X NAT. The Windows machine serving the share is on the same network segment, connected through a switch. Any advice on how to debug this and what could be wrong? I have spent hours looking for a solution on Google and here, but nobody seems to have this kind of problem, and I do not know how to debug it further since there is no meaningful log or trace. I can ping 10.0.0.7 and I can connect to the FTP server on 10.0.0.7, and the Windows machine (XP) has its firewall completely turned off. The problem is that with tshark I see no packets being sent to 10.0.0.7 at all, so the Mac is not even trying to reach the server.
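
    As a Finder-independent check, mount_smbfs accepts the workgroup, user and password inline (syntax from its man page; the names below are placeholders), and smbutil can test whether the server enumerates with the same credentials:

        # mount the share with explicit credentials (WORKGROUP/myuser/mypass are placeholders)
        mkdir -p /tmp/test
        mount_smbfs '//WORKGROUP;myuser:mypass@10.0.0.7/d' /tmp/test

        # list what the server offers with the same credentials
        smbutil view '//WORKGROUP;myuser@10.0.0.7'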

    Read the article

  • How to get rid of "Maxback Engine" for good?

    - by Jonik
    I used to have a Maxtor Shared Storage II network drive; it broke down long ago. (Later I tried to recover some data from it, and partially succeeded, but I haven't yet fully documented that in the other question.) Anyway, I just noticed there are still some lingering bits of the (thoroughly crappy) software that came with the Maxtor device: a background process called "MaxBack Engine". I googled around a bit and found something related but not very useful: http://www.straitmac.com/jforum/posts/list/600.page and http://discussions.apple.com/thread.jspa?threadID=725692. Under /Applications I found "Maxtor EasyManage.app", which I used to use for controlling the drive, and showed it some "rm -rf". Before deleting it, I noted that the bundle did contain "MaxBack Engine.app" under Contents/Resources. But still, after a reboot, the "MaxBack Engine" process is back. I did notice, though, that it only appears when logging in with my usual user account; with another account it isn't launched. So, dear Mac gurus, what could I do about this pest? I guess I could fall back to some Unix hackery and write a cron job that kills any process with that name, but obviously it would be nicer to be able to clean my computer of everything left behind by Maxtor's piece of software.
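
    Since it only comes back for one user account, it is almost certainly started by a per-user login item or launch agent rather than a system-wide daemon. A hedged way to hunt for the leftover (these are the usual auto-start locations, not anything confirmed about Maxtor's installer):

        # search the usual auto-start locations for anything Maxtor/MaxBack related
        find ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons /Library/StartupItems -iname '*max*' 2>/dev/null
        # see whether launchd knows about it for the current user
        launchctl list | grep -i max
        # also check System Preferences > Accounts > Login Items for the affected account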

    Read the article

  • Memory Usage of SQL Server

    - by Ashish
    The SQL Server instance on my server is using almost all the memory available on the physical server. Say I have 8 GB of RAM; SQL Server is then using 7.8 GB of it. I have read articles and many similar questions on this forum, and I understand that the memory is reserved and that SQL Server is using it. But I have two identical servers and two SQL Servers - why is this happening on one SQL instance and not the other? Also, when I run DBCC MEMORYSTATUS it shows:
    VM Reserved    8282008
    VM Committed    537936
    So from this we know that SQL Server reserved the whole 8 GB of memory, but why does this VM Committed figure keep increasing? What I understand is:
    VM Committed: This value shows the overall amount of VAS that SQL Server has committed. VAS that is committed has been associated with physical memory.
    So this is the memory SQL Server has committed (from which I understand it is the physical memory the SQL Server instance is actually using). So I would like to know the reason behind this ever-increasing VM Committed memory on one server and not on the other. Thanks in advance.
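
    For what it's worth, SQL Server commits memory for its buffer pool on demand and keeps growing until it reaches the "max server memory" setting or the OS signals memory pressure, so a steadily rising VM Committed on the busier instance is expected. If the goal is to stop one instance from growing toward the full 8 GB, the cap can be set from the command line with sqlcmd; a hedged sketch, where 6144 MB is only an example value:

        sqlcmd -S localhost -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -S localhost -Q "EXEC sp_configure 'max server memory (MB)', 6144; RECONFIGURE;"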

    Read the article

  • Debian, 2 NICs load-balancing or agregating with one same gateway

    - by pouney
    Hi, I have one server with two NICs connected to one switch, using the same gateway. Behind the switch we have the internet:
    |Debian| - eth0 - switch - internet
             - eth1 - (same switch)
    I don't understand how to load-balance between eth0 and eth1; the inbound/outbound traffic always uses eth1. This is the config:
    # The primary network interface
    allow-hotplug eth0
    auto eth0
    iface eth0 inet static
        address 192.168.248.82
        netmask 255.255.255.240
        network 192.168.248.80
        broadcast 192.168.248.95
        gateway 192.168.248.81
    allow-hotplug eth1
    auto eth1
    iface eth1 inet static
        address 192.168.248.83
        netmask 255.255.255.240
        network 192.168.248.80
        broadcast 192.168.248.95
        gateway 192.168.248.81
    Kernel IP routing table:
    Destination      Gateway          Genmask          Flags Metric Ref Use Iface
    192.168.248.80   0.0.0.0          255.255.255.240  U     0      0   0   eth1
    192.168.248.80   0.0.0.0          255.255.255.240  U     0      0   0   eth0
    0.0.0.0          192.168.248.81   0.0.0.0          UG    0      0   0   eth1
    0.0.0.0          192.168.248.81   0.0.0.0          UG    0      0   0   eth0
    The IPs aren't real; they're just for the example. Does anybody have an idea of the correct routing to use eth0 for 192.168.248.82 and eth1 for 192.168.248.83? I have many examples for multiple gateways, but here the gateway is the same. Thanks all. Regards
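
    With both NICs in the same subnet and pointing at the same gateway, the two identical default routes above don't balance anything - the kernel simply uses the first match (eth1 here). One hedged sketch for splitting outbound traffic is to replace them with a single multipath default route:

        # remove the duplicate defaults and install one multipath default route (run as root)
        ip route del default via 192.168.248.81 dev eth0
        ip route del default via 192.168.248.81 dev eth1
        ip route add default scope global \
            nexthop via 192.168.248.81 dev eth0 weight 1 \
            nexthop via 192.168.248.81 dev eth1 weight 1

    If the goal is aggregation rather than per-route balancing, a Linux bonding interface (balance-alb, or 802.3ad if the switch supports it) is the other common approach.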

    Read the article

  • Why does my DD-WRT not accept SSH connections from my laptop?

    - by Vlad Seghete
    So, here is my system: I have a 2Wire AT&T modem/router which I use for wireless, and a Buffalo router flashed with DD-WRT which is physically attached to the 2Wire and set in its DMZ. I set everything up on the DD-WRT so I can connect to it using SSH, and also so that it forwards SSH requests on a different port to one of the servers behind it. Now, when I am physically connected to the DD-WRT, all this works great and exactly as I would want it to: I SSH to the two different ports using the WAN IP of my network, and I land where I expect to. If, however, I am connected via wi-fi to the 2Wire, the same commands do not work. I have trouble understanding this, since the DD-WRT is set in the DMZ and everything should pass through to it. To further complicate the problem, I tried connecting to the same IP using my phone (wireless disabled, so really from the WAN) and, surprise, it works! If I go back on the local network by enabling wi-fi, the SSH connection times out. To make this even stranger, my WAN IP address always responds to pings (meaning in all the above situations). What could be going on here? I know what I should do: completely disable the 2Wire as a router, use it strictly as a modem, and then use all the routing capabilities of the DD-WRT. It's what I will probably end up doing anyway, but my question remains, because I really want to know what is happening here.

    Read the article

  • Sane patch schedule for Windows 2003 cluster

    - by sixlettervariables
    We've got a cluster of 75 Win2k3 nodes at work in a coarse-grained compute cluster. The cluster is behind a mountain of firewalls and resides in its own VLAN. Jobs of all sizes and types run on the cluster, and all of the executables running are custom-made. (ed: additional notes on our executables) The jobs range from 30 seconds to 7 days in duration, and may contain one executable or 2000 sub-jobs (of short duration). Obviously we are trying to avoid the situation where IT schedules a reboot during a 7-day production job. We have scheduling software which accommodates all of the normal tasks for a coarse-grained cluster, and we can control which machines are active for submission, etc. If WSUS were in some way scriptable (or the client could state its availability for shutdown) we could coordinate the two systems and help out. Currently, the patch schedule is the Sunday after Patch Tuesday, regardless of what is running on the cluster. We have to ask for an exemption every time we want to delay patching a machine for a long-running production job. Basically, while our group is responsible for the machines, we have little control over IT's patch schedule. Is patching monthly on MS's schedule sane for a production Windows cluster? Are there software hooks in WSUS where we could say, "please don't reboot just yet"?

    Read the article

  • XCOPY access denied error on My Documents folder

    - by Ryan M.
    Here's the situation. We have a file server set up at \\fileserver\ that has a folder for every user at \\fileserver\users\first.last. I'm running an xcopy command to back up the My Documents folder from their computer to their personal folder. The command I'm running is:
    xcopy "C:\Users\%username%\My Documents\*" "\\fileserver\users\%username%\My Documents" /D /E /O /Y /I
    I've been silently running this script at login without the users knowing, just so I can get it to work before telling them what it does. After I discovered it wasn't working, I manually ran the batch script that executes the xcopy command on one of their computers and got an access denied error. I then logged into a test account on my own computer and got the same error. I checked all the permissions for the share and security and they're set how I want them. I can manually browse to that folder and create new files. I can drag and drop items into the \\fileserver\users\first.last location and it works great. So I tried something else to find the source of the access denied problem: I ran an xcopy command to copy the My Documents folder to a different location on the same machine, and I still got the access denied error! So xcopy seems to be denied access whenever it tries to copy the My Documents folder. Any suggestions on how I can get this working? Anyone know the reason behind the access denied error?
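
    For what it's worth, on Windows 7 the real folder under C:\Users\<name> is called Documents; the "My Documents" entry there is a hidden junction whose ACL denies directory listing, which is a common reason xcopy reports access denied on exactly this path. A hedged variant of the command that targets the real folder name and continues past individual failures (/O is dropped here because copying ownership/ACL information typically needs administrative rights):

        xcopy "C:\Users\%username%\Documents\*" "\\fileserver\users\%username%\My Documents" /D /E /C /I /Y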

    Read the article
