Search Results

Search found 25785 results on 1032 pages for 'oracles solution for soc'.

  • How to get German QWERTY on Windows?

    - by Arturas M
    Well, I'm used to the world-standard keyboard layout, which is QWERTY and not QWERTZ... But on Windows I can't find a choice for German input that would be QWERTY, not QWERTZ. On Linux there was a German input with QWERTY, so that was fine. I believe it should exist on Windows too? Because I'm sick of this QWERTZ, always having to correct and search for Z or Y... Yeah, I know, if the Germans made a mistake, it wasn't losing WW2...
    Edit: Maybe I wasn't clear enough: it's not about switching between different language inputs. It's about having all the German keys in the German input, just with Z and Y in the places all the world's keyboards use, like the US keyboard does...
    Solution: Unfortunately, by searching everywhere and waiting for answers I couldn't find what I needed, so I came up with this solution: I used the Microsoft Keyboard Layout Creator and created my own custom layout by modifying the default German layout and swapping the places of Z and Y. In case somebody needs it and doesn't want to go the hard way, I decided to upload it, so here are the links to download and install the keyboard layout: http://www20.zippyshare.com/v/95071447/file.html http://rapidshare.com/files/3822150342/German%20QWERTY%202%20by%20Arturas%20M.zip It will appear as a choice under German input between languages in your language bar. If you just want to use this German layout, remove all other German layouts and install this one. Good luck and have fun using it!

  • fcgiwrap listening to a unix socket file: how to change file permissions

    - by user36520
    I have a web server (nginx) and a CGI application (gitweb) that is run with fcgiwrap to enable FastCGI access to it. I want the FastCGI protocol to take place over a unix socket file. To start the fcgiwrap daemon, I run: setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock" (this is a daemontools daemon). The problem is that my web server runs as the user www-data and not the user git, and fcgiwrap creates the socket fastcgi.sock with user git, group git, and read-only for the non-owner. Thus nginx, running as www-data, can't access the socket. Apparently fcgiwrap is not able to set the permissions of unix socket files, which is quite annoying. Moreover, if I manage to have the socket file exist before I run fcgiwrap (which is quite difficult, given that I did not find any shell command to create a socket file), it quits with the following error: Failed to bind: Address already in use. The only solution I found is to start the server the following way:
    rm -f fastcgi.sock  # ensure that the socket doesn't already exist
    (sleep 5; chgrp www-data fastcgi.sock; chmod g+w fastcgi.sock) &
    exec setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"
    Which is far from the most elegant solution. Can you think of anything better? Thanks
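
    A tidier option, assuming spawn-fcgi is available (it packages exactly this use case), is to let spawn-fcgi create the socket: it can set the socket's owner, group and mode before handing off to fcgiwrap. A sketch only; flags and the fcgiwrap path should be checked against your system:

        # let spawn-fcgi create the socket with the right ownership and mode,
        # then run fcgiwrap as git; -n keeps it in the foreground for daemontools
        exec spawn-fcgi -n \
            -s "$PWD/fastcgi.sock" \
            -M 0660 -U git -G www-data \
            -u git -g git -- /usr/sbin/fcgiwrap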

  • What applications do people use for Windows folder management? How do you switch between folders in

    - by 118184489189799176898
    I'm always switching between clients' folders in different applications like Photoshop, SQL manager, Explorer, etc. It's so slow to go between them: navigating to the folder, or copying and pasting the directory path, still takes too long. It's so annoying to do; someone must have a good solution. I was thinking: if there were a "recently accessed" folders list available within every folder explorer window... so in any application, if I go "File > Open", it would have something, somewhere, that lists the recently accessed folders - that would be really helpful. I am aware of the Recent Places folder in Win7, but this sucks because it is not sorted by date accessed. Perhaps if there were a way to change this, then it would become a decent feature? Is there some application that already does this? I'm sure someone has already solved this issue in a more elegant way than I can think of. I'm keen to know what programs people use or how people address this issue. Thanks...

  • iPhone Lag Terrible - SLOW - What's going on with the iPhone OS?

    - by Sam Schutte
    I've had my iPhone 3G for about a year now, and it seems like at least once a month it gets bogged down and gets slower and slower - horrible lag when typing, and going back to the home screen or opening an app can take 20 seconds. Has anyone else run into this and found "the" solution? What you always read on other boards is to reboot the handset (hold down home and the power button), but that doesn't improve anything for me. I've reinstalled the OS about 5 times now, and I'm getting pretty sick of doing it so often. And I don't really buy that it's a hardware issue, since it works fine for weeks after a fresh install. Anyone have a solution, or an idea of what specific actions cause this kind of evident data corruption (OS corruption?) and slowness?
    Note - I'm looking for specific things here. That is, has anyone done the research to see exactly what on the phone operating system is getting messed up that causes this lag (which is discussed all over the internet, with no working solutions)? I don't own a Mac, so I can't delve into the guts of the iPhone very well to see what's up with it...
    Some additional info:
    - Reboots (hold down power/home) and "sleeps" (slide off) do nothing. Only fresh re-installs help.
    - I only have about 15 apps installed - sometimes you see the advice to uninstall apps if you have too many; I'd hope that 15 isn't too many, and even when I've had none installed, it still gets hung up after a period of time.
    - This phone is not jailbroken, and it is running the 3.0.1 release.

  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few boxes (5 or so, and it will grow). The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand up until now, and don't want to anymore (static IP addresses, disabling all necessary services, installing required packages...). Can anyone balance the pros and cons of the following options, or suggest something more intelligent?
    1: Individually install CentOS on all the boxes and manage the configs with chef/cfengine/puppet. This would be good, as I have wanted an excuse to learn to use one of these applications, but I don't know if this is actually the best solution.
    2: Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications I can just reboot the boxes from a new image. How do cluster guys normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be correctly generated at boot?
    I'm leaning towards the PXE solution, but I think monitoring with munin or nagios will be a little more complicated with this. Anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.
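
    If the PXE/image route wins, one rough sketch for the HWADDR question (an assumption, not something from the original setup) is to regenerate those lines at boot from the live interfaces, for example from an rc.local-style script before the network service starts; InfiniBand interfaces may need equivalent but separate handling:

        #!/bin/bash
        # rewrite HWADDR= in each ifcfg file from the MAC of the matching interface
        for cfg in /etc/sysconfig/network-scripts/ifcfg-eth*; do
            [ -e "$cfg" ] || continue
            dev=$(basename "$cfg"); dev=${dev#ifcfg-}
            mac=$(cat "/sys/class/net/$dev/address" 2>/dev/null) || continue
            sed -i "s/^HWADDR=.*/HWADDR=$mac/" "$cfg"
        done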

  • Central Authentication For Windows, Linux, Network Devices

    - by mojah
    I'm trying to find a way to centralize user management & authentication for a large collection of Windows & Linux servers, including network devices (Cisco, HP, Juniper). Options include RADIUS/LDAP/TACACS/... The idea is to keep up with staff changes, and with who has access to these devices. Preferably a system that is compatible with Linux, Windows & those network devices. Windows seems to be the most stubborn of them all; for Linux & network equipment it's easier to implement a solution (using pam.d, for instance). Should we look at an Active Directory/Domain Controller solution for Windows? Fun side note: we also manage client systems that are often already in a domain, and trust relationships between domain controllers aren't always an option for us (due to client security restrictions). I'd love to hear fresh ideas on how to implement such a centralized authentication "portal" for those systems.
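
    If the Windows side does end up dictating Active Directory, the Linux boxes can authenticate against the same domain; a hedged sketch with realmd/SSSD (package names are Debian/Ubuntu-style, the domain and group names are placeholders):

        # join a Linux host to AD so domain accounts can log in through SSSD/PAM
        apt-get install -y realmd sssd sssd-tools adcli
        realm discover example.corp                # confirm the domain is reachable
        realm join --user=Administrator example.corp
        realm permit -g linux-admins@example.corp  # optionally restrict who may log in

    The network devices can then point RADIUS or TACACS+ (for example FreeRADIUS or Windows NPS) at the same directory, so staff changes only have to be made in one place.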

  • SSL wildcard certificates and trailing 'www'

    - by user173326
    I've got a wildcard SSL certificate for *.mydomain.com. I'm using nginx, redirecting all traffic from http to https, and also rewriting the URLs without a leading www (if there is one). So it has:
    1) http://subdomain.mydomain.com ---> https://subdomain.mydomain.com
    2) http://www.subdomain.mydomain.com ---> https://subdomain.mydomain.com
    3) https://www.subdomain.mydomain.com ---> https://subdomain.mydomain.com
    4) https://subdomain.mydomain.com ---> https://subdomain.mydomain.com
    However, since my cert is for *.mydomain.com, case 3 gets an SSL error in Chrome ('This is probably not the site that you are looking for!'), but if you click through it gets redirected and all is well. I understand why, since the initial connection is for https with a www (two levels of subdomains), which doesn't match what is on the wildcard certificate. I thought a solution would be to get an additional cert for *.*.mydomain.com to cover www.*.mydomain.com, but it seems like that won't work. I spoke to agents from Namecheap and Comodo, and both said *.*.mydomain.com was not possible. I also came across this: https://support.quovadisglobal.com/KB/a60/will-ssl-work-with-multilevel-wildcards.aspx Is there a solution to this? To be able to cover www.*.mydomain.com?
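
    Note that a redirect alone can never silence the warning in case 3: the certificate is checked during the TLS handshake, before nginx ever gets to send the 301, so only a certificate that actually names the www.subdomain.mydomain.com hosts (for example a SAN certificate listing the www variants you care about) avoids it. For the redirect itself, a sketch of the stripping block; the paths are placeholders and the certificate they point at must cover those www names:

        server {
            listen 443 ssl;
            # named capture pulls out the real subdomain
            server_name ~^www\.(?<sub>[^.]+)\.mydomain\.com$;
            ssl_certificate     /etc/nginx/ssl/www-sans.crt;
            ssl_certificate_key /etc/nginx/ssl/www-sans.key;
            return 301 https://$sub.mydomain.com$request_uri;
        }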

  • Using virtualization infrastructure for J2EE application distribution - viable alternative?

    - by Dan
    Our company builds custom J2EE web solutions. At the moment, we use standard J2EE distribution mechanisms (ear/war archives). Application servers are generally administered by our clients' IT departments, and since we do not have complete control over the environment, a lot of entropy can be introduced into the solution. For example:
    - latest app server patch not applied
    - conflicting third-party libraries inside the app server root
    - server runtime and tuning parameters not configured (for example, number of connections in the database pool)
    We are looking into using virtualization infrastructure for J2EE application distribution. Instead of sending the ear/war archive, we'd send an image with an application server node and our application preinstalled. Some of the benefits are the same as with using virtualization infrastructure in general, namely better use of hardware resources. For us, it reduces the entropy of the hosting infrastructure - distributing a VM should be less affected by the hosting environment. So far, the downside I see is in application server licenses; clients will have to use dedicated servers for our solution, but this is generally already done that way. Also, there is added complexity in maintaining virtualization infrastructure, but this is often something IT departments have more experience with than administering and fine-tuning J2EE solutions. Anyone have experience with this model? What are the downsides? Will we not just replace one type of complexity with another?

  • Paid antivirus solutions for Windows

    - by AP Erebus
    NOTE: If you're looking for recommendations on free antivirus, check this question: http://superuser.com/questions/2/free-antivirus-solutions-for-windows
    Much like the above, I'm curious about opinions on the best PAID antivirus solution, personal or commercial. Enterprise solutions are welcome, and as much detail regarding costs as possible is welcome. Personally I'm looking for a licence that will grant me more than one computer install and quality technical support, for personal use. As in the free antivirus question:
    - See if your antivirus of choice is already listed. Chances are it is.
    - If you spot an answer that mentions one you already use, vote that up if you think it's a good solution.
    - If you know of a feature or drawback not listed, or can include experiences in dealing with it, please edit the answer accordingly.
    - If you know of any that can also be used at work, please point this out.
    - This covers all Windows platforms from XP, Vista and Windows 7.
    - If you see an existing entry that needs an update or to add your testimonial, please do.

  • Splitting builds across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across networked machines?
    Use case: We are an average software development company. We own around 50 development workstations (quad core 2.66 GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any single moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any single moment. Obviously all of them are continuously built on the server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes.
    The problem: Whenever we build 5 projects in a row, the last project is going to be ready after around 25 - 50 minutes. Building in parallel does not solve the problem (the build is only a part of the game; then you need to deploy, run tests, etc.). YES, the correct solution is to add another build server, but "that involves buying new expensive hardware, and we already spent a lot!". Yeah, right (damn them)!
    Anyway, what about splitting the build among developer workstations? Let's say whenever we need to build project "A" we check 5 workstations and start the build on all that are not overloaded. The build can be cancelled by a developer if he really needs all the power of his machine, as long as there is at least 1 machine that is still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Anyone tried something like this? Are there any good practices? Any helpful software?
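
    The "borrow an idle workstation" idea can at least be prototyped with nothing but ssh; a rough sketch (hostnames, paths and the build command are placeholders) that runs the build on the least-loaded candidate:

        #!/bin/bash
        # pick the least-loaded of a few workstations and run the build there over ssh
        candidates="ws01 ws02 ws03 ws04 ws05"
        best="" ; best_load=999
        for h in $candidates; do
            load=$(ssh -o ConnectTimeout=3 "$h" "cut -d' ' -f1 /proc/loadavg" 2>/dev/null) || continue
            if awk -v a="$load" -v b="$best_load" 'BEGIN { exit !(a < b) }'; then
                best=$h ; best_load=$load        # keep the lowest 1-minute load average
            fi
        done
        [ -n "$best" ] || { echo "no workstation available" >&2; exit 1; }
        echo "building project A on $best (load $best_load)"
        ssh "$best" "cd /srv/checkout/projectA && ./build.sh"

    If the projects are C/C++, distcc solves the compile part of this more formally; otherwise CI build agents (for example Jenkins agents on a few workstations) give the same effect with scheduling and monitoring included.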

  • How do I load balance between two Linux machines?

    - by William Hilsum
    Inspired by the Stack Overflow network, I am now obsessed with HAProxy and trying to use it myself. At the moment, each HAProxy box has got two network cards (well, two configured; I can have a maximum of 4 and wasn't sure if they needed their own one for management between the boxes). On both machines, the backend one (eth1) has a private IP that goes to a switch connected to the webservers, and the front-facing one (eth0) has a public internet IP that is routed straight through. In addition, I have created an additional virtual IP for eth0 called eth0:0, which has got a third public IP address. I just about get how to use it for load balancing between multiple web servers that are behind it, but I am failing to load balance between the two HAProxy boxes - they appear to fight for the virtual IP, and this does not appear to be a smart solution. Now, by using the virtual shared IP address, this setup appears to work and does seem to give me maximum uptime, but is this the correct way to do it, or is there a smarter way? I have been looking at other Linux packages such as keepalived, but I have only been using Linux (server) for a week now and am at the limits of my understanding. Is there anyone who has done this before who can advise anything for maximum uptime?
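
    keepalived is indeed the usual companion here: both HAProxy boxes run it, speak VRRP to each other, and the shared public IP floats to whichever box is healthy, so clients only ever use the one virtual address. A hedged sketch of the master's /etc/keepalived/keepalived.conf (interface, password, priority and the address itself are placeholders; the backup box uses state BACKUP and a lower priority):

        vrrp_script chk_haproxy {
            script "killall -0 haproxy"   # succeeds as long as an haproxy process exists
            interval 2
        }
        vrrp_instance VI_1 {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 101
            advert_int 1
            authentication {
                auth_type PASS
                auth_pass s3cret
            }
            virtual_ipaddress {
                203.0.113.10/24           # the shared "virtual" public IP
            }
            track_script {
                chk_haproxy
            }
        }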

  • Install and enforce a scheduled task across a Windows domain

    - by Ricket
    We have a small domain of about 70 Windows computers (XP and 7). We want to schedule a command (an update mechanism) to run on all computers periodically, and we want the task to run regardless of the computer's connection to our network (i.e. the task should run even on a laptop that isn't connected to our VPN). We have a Microsoft System Center Essentials 2010 server, so that might come in handy. The options I see are these:
    - Do it completely manually. Install the scheduled task by hand or remotely using psexec (and the at command?) for each computer in our network. Enforce that newly imaged computers should have this task installed on them before being deployed to the employee, or the task should be in the image. High initial cost (having to do this for each of 70 computers), but building it into the image might work... But there is some maintenance in making sure the task is added to everything, and I fear that a year or two down the road we will have forgotten about it, or gotten sloppy, or had new IT employees who miss this step, and some computers won't have the task.
    - Have one of our servers run a script that loops through all computers and psexec's the command on each computer in the network - it would only run on running, connected computers, so this solution wouldn't work. I suspect SCE could do something like this too, but again this is not a good solution.
    Neither of these is ideal, and I'm certain there is a better way to do it - right? What is the best way to accomplish this task?

  • Walkthrough/guide for building an application server for a multi-tenant web app [on hold]

    - by Khalid Adisendjaja
    The web app will detect a subdomain such as tenant1.app.com, tenant2.app.com, etc. to identify the tenant environment. Each tenant environment will have different database credentials (port, db name, etc.) but will still connect to the same database server. Each tenant should use app.com as their main domain; using their own domain is prohibited. Each tenant will have their own REST API endpoint, such as tenant1.app.com/api/v1/xxxx, tenant2.app.com/api/v1/xxxx, tenant3.app.com/api/v1/xxxx.
    I've come to a simple solution by setting a wildcard subdomain (*.app.com) in the webserver's (Apache/Nginx) vhost configuration file. I have googled so many concepts for building a multi-tenant app server but still don't understand how to really do it, what the right way to do it is, and what is actually required for this task. So I've come to these questions:
    - Do I need a proxy server, DNS masking, etc.?
    - How do I monitor each tenant's activity?
    - What about server performance, load balancing, and scalability?
    - How do I set up an SSL certificate for each tenant?
    - What about application caching for each tenant?
    - Is it reliable to use this setup for production?
    - etc...
    I have very little experience with server infrastructure, so I'm looking for a DIY walkthrough, a step-by-step guide, or a sophisticated solution ready to be implemented for production.
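
    For the web tier, a single wildcard vhost that passes the Host header through is usually enough; resolving the subdomain to the right database credentials is then the application's job. A rough sketch of the nginx side (the upstream port and header name are assumptions):

        server {
            listen 80;
            server_name ~^(?<tenant>[^.]+)\.app\.com$;   # tenant1, tenant2, ...
            location / {
                proxy_set_header Host $host;             # the app reads this to pick its DB
                proxy_set_header X-Tenant $tenant;       # or pass the tenant explicitly
                proxy_pass http://127.0.0.1:8080;
            }
        }

    As long as every tenant stays under *.app.com, SSL is the easy part: one wildcard certificate on this vhost covers them all.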

  • Monospace font which supports at least both of Korean hangul and the Georgian alphabet?

    - by hippietrail
    Being both a language enthusiast and a programmer, I find myself often doing programming or text processing involving foreign-language alphabets and scripts. One annoyance, however, is that CJK fonts (those which support Chinese, Japanese, and/or Korean) usually only contain glyphs for Latin, Greek, and Cyrillic at best. Often the Asian glyphs will be beautiful but the other glyphs can be quite ugly. Just as often, in text editors you can only choose a single font - not one for CJKV and one for everything else, each used for rendering the appropriate characters. Korean is one of the languages I'm most interested in currently. I only need hangul / hangeul for monospaced editing; hanja isn't common enough to be a problem. Another of the languages I'm currently involved in is Georgian, which has its own alphabet that is a little exotic but has pretty good support in common fonts on Windows and *nix. But I am as yet unable to find a font with good Korean glyphs and also Georgian glyphs. My editor of choice is gVim, so an answer telling me how to set it to use two fonts together would be just as good. Currently I'm using it mostly under Windows 7, so a Vim-specific solution would be needed rather than a *nix-specific solution.
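
    One thing worth trying before hunting for a single do-everything font: gVim can combine two fonts by itself, using 'guifont' for ordinary single-width glyphs (Latin, Georgian) and 'guifontwide' for double-width glyphs such as hangul. A sketch for _vimrc; the font names are only examples and must be installed and actually cover those scripts:

        " base font for Latin/Georgian, second font for wide (hangul) glyphs
        set encoding=utf-8
        set guifont=DejaVu_Sans_Mono:h11
        set guifontwide=GulimChe:h11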

  • Share Exchange Calendar Outside Organization

    - by CalCurious
    I'm trying to figure out the best way to meet a user's (Corp-A-User) request to share their calendar with someone at another company (Corp-B-User). We're running Microsoft SBS 2008 with Exchange 2007 and SharePoint. The remote user is running Exchange, version unknown. Corp-A-User wants to give Corp-B-User the ability to create appointments on Corp-A-User's calendar. This will naturally require sharing of free/busy information. Corp-A-User naturally lacks the vision to see ANY problem with giving Corp-B-User full access to their calendar. But I see the problems with that and would prefer that Corp-B-User have only the ability to see free/busy and create appointments. Most of the external publishing options that I have thought of, such as WebDAV, allow displaying a user's calendar, but there are problems with security and with the ability to create appointments. Right now, I'm thinking the cleanest solution would be to use a Google calendar along with Google Calendar Sync for the two users' Outlook clients. But I'm not sure there isn't a better way, and I hate the idea of pushing a corporate calendar up to Google - not to mention the issues likely to pop up from the multiple sync paths. Does anyone have a good solution for this scenario that they would be willing to share?

  • Send keystrokes simultaneously to both host and slave over internet?

    - by donodarazao
    I would like to watch movies with a friend who lives far away from me. For this, the playback should be synchronized on both our PCs. However, we have some constraints:
    - Due to our low-bandwidth internet, any form of streaming solution wouldn't work. We do, however, both have the same copy of the movie on our hard disks.
    - We use movies to learn languages, and because of this we very frequently pause and rewind. The typical "3...2...1...go!" solution over Skype wouldn't work because it would soon get out of sync.
    I imagine an approach that sends keystrokes simultaneously to both our PCs would work (for example, if I press space to pause the movie on my PC, space should also be sent to his PC). Any ideas how this could be realized? I looked into Synergy and InputDirector, but neither seems to be an option, because:
    - I don't want to see the desktop of my friend, I want to see my desktop
    - Keystrokes should be sent simultaneously to both PCs, not just to one PC
    We both have Windows 7 x64, and we might use any media player (VLC, XBMC, ...).

  • Autologin 2 Windows users OR Login another user from the desktop

    - by fpdragon
    I'm using two Windows users on my HTPC at the same time. One is just for watching videos and one is for administration via remote. This setup is quite ideal for me, since Windows can handle multiple concurrent logins with the "RDP concurrent hack" (Google it). The problem is, I want both users to be logged in automatically when the PC is started. It shall be possible to watch TV, and the admin user shall also be automatically logged in to start my scripts and other tasks, even if I haven't logged in via remote desktop manually. Later, when I want to admin my HTPC, I can just connect to the admin user over RDP without interrupting the video playback on the actual HTPC's screen, and check my cleanup tasks, downloads, ... which already executed for this admin user. But right now I have found no solution to automatically log in user A from user B's desktop, and I also found no solution to auto-login both users immediately at startup. As a workaround I have to fire up my other notebook machine and log in once with the remote user via RDP. From then on, the remote admin user runs concurrently with the main user in the background of the machine. The other workaround would be... after startup, switch user from the main user to the admin user and then back again. But that also requires manual steps. I'm on a Windows 8 system right now, but all info for Win7 or XP would also be interesting. Thanks a lot for all ideas. PS: just to prevent useless posts... don't tell me that only one user can be logged in to Windows. ;)

  • Is it possible to change "working directory" of XeTeX?

    - by Herbert Sitz
    Using XeTeX, there are many working files that get created in the process of producing the PDF, and they litter the directory where my main .tex file is. Is it possible to change the working directory of XeTeX so that it stores all these scratch files in some other directory, out of the way? There is a previous question on Superuser.com that discusses a utility that cleans up the working files by deleting them after they're produced: http://superuser.com/questions/95712/how-to-avoid-littering-ones-tex-directories-with-intermediate-files That solution doesn't work for me since I'm using XeTeX, but it also seems like it would be preferable to simply be able to designate a "scratch" directory where all working files are saved. I haven't been able to find any info on how to do it, though. Is there a way? (My question is prompted partly by the fact that I often work with files in a directory that is shared using Dropbox, so it creates a lot of unnecessary traffic if files are getting created and destroyed willy-nilly. I don't know if it affects speed in any way, but the idea of having a separate working directory that is not shared/replicated by Dropbox would be a cleaner solution, even if I could use the method suggested in the earlier thread.)
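
    XeTeX, like the other TeX engines, accepts an -output-directory option, which sends the .aux/.log/.toc clutter (and the PDF) somewhere else; keeping that directory outside the Dropbox-synced tree is exactly the desired effect. A small sketch (paths are illustrative; \include'd files and bibtex runs may need the same option or extra care):

        mkdir -p ~/tex-build
        xelatex -output-directory="$HOME/tex-build" main.tex
        cp ~/tex-build/main.pdf .    # optionally bring only the finished PDF back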

  • Update a bootable OS X drive clone with rsync?

    - by Joe
    The question: is it possible to keep a bootable backup drive clone of OS X updated with rsync? If rsync is not a viable option, are there alternatives?
    The setup: My situation is as shown above. One internal Samsung 840 SSD [120 GB] in use as my OS X 10.8 boot disk on a recent-model Mac Mini. I have successfully cloned that drive with Disk Utility to a 125 GB partition of another HDD in an external USB 3 enclosure, and at that point I am able to boot to it.
    The goal: As my last system went out in a fiery blaze, taking much valuable data with it, I have a new respect for a proper backup solution and really want to do this right. My goal is to achieve an automated differential backup/update from disk A to disk B while, most importantly, maintaining bootability on the external drive. And I would prefer to do this differentially to minimize stress on the drives. Hence rsync was the first thing to come to mind.
    What I have tried: following along with Jamie Zawinski's differential Mac bootable backup solution, running this manually initially worked - I tested it with only a very minuscule file change and everything was fine / the external booted and all. Now, after subsequent passes, rsync fails, throwing errors particularly relating to updating 'boot.efi' (I'm not at the machine currently; I will update with the precise log message once I return home). Is this a drive/partition size issue? Does rsync require more space? If it can't be done, are there any alternatives? I've heard whispers of dd.
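
    For reference, the general shape of the "clone then bless" pass (a sketch, not a recipe verified on 10.8; the volume name is a placeholder, and Apple's bundled rsync 2.6.9 handles -X/-A poorly, so a newer rsync from MacPorts or Homebrew is usually recommended):

        # differential update of the clone: keep hard links, ACLs and extended
        # attributes, stay on one filesystem, drop files deleted from the source
        sudo rsync -aHAXx --delete \
            --exclude '/Volumes/*' --exclude '/dev/*' \
            --exclude '/private/tmp/*' --exclude '/private/var/vm/*' \
            --exclude '/.Spotlight-V100' --exclude '/.Trashes' \
            / /Volumes/CloneHD/
        # re-bless the clone so the firmware still sees it as a bootable system
        sudo bless --folder /Volumes/CloneHD/System/Library/CoreServices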

  • Does this exist: a standardized way of documenting a file-system structure

    - by eegg
    At work, I'm in charge of maintaining the organization of a whole lot of varied data on a standard file-system. Part of this is coming up with a sensible classification (by similarity, need, read/write access, etc.), but the bigger part is actually documenting it: what documents/files/media should go where, what should not be in this directory, "for something slightly different, see ../../other-dir", etc. At the moment, I've documented this using a plaintext file filing.txt in every directory I want to document. If someone is unsure what's meant to be in any directory, they read that file. This works alright, but it seems odd that I have this primitive custom solution to a problem that any maintainer of a non-trivial directory structure must experience. Every company I've known of, for example, has some kind of shared file-system where agreed terminology for categorization is important. In my experience, people just have to learn what's what by trial-and-error and experimentation. So allow me to propose a better solution, and hopefully you can tell me if it exists. Any directory on any filesystem can have a hidden plaintext file named .filing. Its contents are descriptive human language. It uses some markup like Markdown, with little more than bold, italic, and (relative) hyperlinks to other directories. Now a suitably enabled file browser will check for a file named .filing whenever it displays a directory. If it exists, its contents are parsed and displayed in an unobtrusive pane near the directory-path widget. Any links therein can be clicked, and the user will be taken to the target directory of that link. I think that the effort of implementing such a standard would pay back many times over in usability gains. We would have, say, plugins for Nautilus, Konqueror, etc. It could be used to display directory information in the standard file lists served by webservers. And so on. So, question: does such a thing exist? If not, why not? Do people think it's a worthwhile idea?
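
    Until a file browser grows support for it, the convention is at least trivial to consume from a shell; a toy sketch of a "why is this directory organized this way" helper:

        #!/bin/bash
        # print the .filing notes (or the legacy filing.txt) for a directory
        dir=${1:-.}
        for f in "$dir/.filing" "$dir/filing.txt"; do
            if [ -r "$f" ]; then
                echo "== notes for $dir =="
                cat "$f"
                exit 0
            fi
        done
        echo "no filing notes in $dir" >&2
        exit 1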

  • iptables to block non-VPN-traffic if not through tun0

    - by dacrow
    I have a dedicated webserver running Debian 6 and some Apache, Tomcat, Asterisk and mail stuff. Now we needed to add VPN support for a special program. We installed OpenVPN and registered with a VPN provider. The connection works well and we have a virtual tun0 interface for tunneling. To achieve the goal of only tunneling a single program through the VPN, we start the program with sudo -u username -g groupname command and added an iptables rule to mark all traffic coming from groupname:
    iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42
    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked packets.
    Problem: if the VPN fails, there is a chance that the special to-be-tunneled program communicates over the normal eth0 interface.
    Desired solution: All marked traffic should not be allowed to go directly through eth0; it has to go through tun0 first. I tried the following commands, which didn't work:
    iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
    iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT
    It might be that the above iptables rules didn't work because the packets are first marked, then put into tun0 and then transmitted by eth0 while they are still marked. I don't know how to de-mark them after tun0, or how to tell iptables that all marked packets may pass eth0 if they were in tun0 before or if they are going to the gateway of my VPN provider. Does someone have any idea for a solution?
    Some config info:
    iptables -nL -v --line-numbers -t mangle
    Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
    num pkts bytes target prot opt in out source destination
    1 591K 50M MARK all -- * * 0.0.0.0/0 0.0.0.0/0 owner GID match 1005 MARK set 0x2a
    2 82812 6938K CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 owner GID match 1005 CONNMARK save
    iptables -nL -v --line-numbers -t nat
    Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
    num pkts bytes target prot opt in out source destination
    1 15 1052 SNAT all -- * tun0 0.0.0.0/0 0.0.0.0/0 mark match 0x2a to:VPN_IP
    ip rule add from all fwmark 42 lookup 42
    ip route show table 42
    default via VPN_IP dev tun0
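
    One way to get the desired fail-closed behaviour without de-marking anything (a sketch, not verified against this exact setup): routing table 42 is already the only table consulted for marked packets, so give it an explicit fallback that refuses traffic when the tun0 default route disappears. The encapsulated copies that OpenVPN itself sends out of eth0 are new packets owned by the openvpn process, so they should be neither gid-matched nor marked and remain unaffected:

        # while tun0 is up, its default route wins; when the VPN drops, this
        # fallback keeps marked traffic from falling through to the main table
        ip route add unreachable default metric 1000 table 42
        # optional safety net: marked packets must never leave eth0 directly
        iptables -A OUTPUT -o eth0 -m mark --mark 42 -j REJECT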

  • Is there a screen sharing/remote desktop app for mac that lets you use a different host screen resolution?

    - by MarqueIV
    OK, there are tons and tons of questions about remote desktop for the Mac, and they're all being closed as duplicates. I, however, am specifically looking for one that will let me use a different resolution than the host, the way you can with Remote Desktop for Windows. For instance, when I connect to my 11" MacBook Air booted into Windows 7 from my quad-screen desktop, also booted into Win7, using Microsoft's Remote Desktop Client, it blanks out the screen on the notebook, then virtualizes the video across all four of my desktop's monitors at their native resolutions (2560x1600, 2 x 1920x1200 and 1600x1200), and the notebook now acts as if it has four physical monitors connected to it. All of this from a notebook that only has a 1366x768 native resolution. Even when running OS X on the client running RDC - while it doesn't support multiple monitors like its Win counterpart - it still lets me run at the native resolution of the client screen of 2560x1600. Again, it just blanks out the host screen while doing so. However, when using the Mac's Screen Sharing, since that is just glorified VNC, it just mirrors what's already on the host's screen, meaning it will always be a single screen with a resolution of 1366x768. This of course makes sense, since VNC is a mirroring solution, not a video-virtualizing one like RDC, but it means that on my quad-monitor setup the remote window isn't even large enough to fill up a single monitor, let alone four (unless you have a client that can scale it up, but that's video scaling; it's still only 1366x768). So what I'm looking for is whether there is a solution on the Mac that lets me do the same thing as RDC in a Win environment. I don't care if I have to pay. I'd gladly pay several hundred dollars for this. I just need that specific feature.
    Note: People have suggested various VNC clients, but the VNC host still runs at 1366x768, so that will not work here. Ever. Also, people have suggested Synergy/Synergy+/Teleport and such, which share the keyboard and mouse, not video - a completely different animal, unrelated to what I'm looking for.

  • Preferred mail system/server for a company?

    - by Trevoke
    Say you are responsible for setting up an email solution at a company. Which would be your choice? I know of the following options, but many of them not well:
    - Gordano Mail System
    - Exchange
    - Exim
    - Postfix
    - Qmail
    - Zimbra
    Having used it for a little over two years, I really, really like Gordano Mail System. They offer a whole bunch of things, like calendaring, anti-spam, anti-virus, extremely complete and filterable logging options, aliases, a customizable webmail interface... And their software can be installed on either a Windows or Linux OS. In addition, their support is top-notch and their knowledgebase comprehensive (and, I will admit with a touch of pride, I have contributed, with my questions, to the addition of a few articles in there). Of course, they're not free, which can be a problem, but they're not Exchange, and they do offer pretty much everything that Exchange offers -- which is great if you want to stay away from that but need all the features. Although, if you need a BlackBerry Exchange Server or something similar, I'm not sure what you should go for. So... what would your choice be? Why? I've never played with a more DIY email solution, but I'm sure many people here have and wouldn't trade their setup for the world :)

  • using a second computer as a mere screen/monitor in X (VNC?)

    - by lara michaels
    Hello. My goal is to use three monitors with my Linux system. It is a laptop, so adding another video card is not the easiest solution. (I have investigated a number of such options: getting a docking station with a PCI slot, USB/CardBus VGA adapters, etc., and for the time being don't want to go that way.) I am wondering if using an older desktop + screen I have lying around as the third "monitor" might be the easiest solution, if only there is a way to get it to work as a seamless, integrated desktop. I was wondering if I can use VNC or perhaps X itself (?) to achieve the following:
    - computer A is my main computer; it has all my files, etc.
    - computer B is used just to display on an additional screen
    - keyboard + mouse are connected to computer A
    - use VNC or X to connect the two so that computer B shows an X screen that is just as if it were a third physical screen connected to computer A
    I don't know if the last point is clear, but what I mean is that I would like to be able to:
    - have my window manager assign/move around virtual desktops on all three screens
    - move windows back and forth between the screens attached to computer A and the screen of computer B
    - copy something in an app being shown on a screen of computer A and paste it into an app being shown on the screen attached to computer B
    - access the filesystem on my main computer (A) when using applications that are being shown on the screen attached to computer B
    Basically, I would like X to treat computer B just like it was nothing but a third physical screen... Is this doable? : ) ~lara

  • Remote paging with Nagios when network is down and email won't work -- cellular modems and alternatives

    - by Quinten
    What is the best option for remote paging when network services are down? I'm looking for a solution that can let me know when network services are down during off-hours only, and especially when email/SMTP services are out. Therefore, it needs to be redundant to our network and power supply. I'm imagining a cellular modem is one option. What's the price range for these? Is anybody using them, and do you feel they are worth the cost? I'm imagining it's something where we would end up sending an emergency page ~ 1x/month at most, so I'd like the pricing to reflect that - I don't mind a high per-page cost as long as it has a low recurring cost. Another option would be to expose at least one server to remote ping and run a check script on a remote server. Are there paid options for this? Currently, we run Nagios on a Linux VM on a Windows 2008 Hyper-V host. It would be great if the solution would work in that environment, but I know it's tricky with external devices, and we could move Nagios to a standalone workstation if needed.
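
    If the "expose one host to remote ping" route is acceptable, the remote end can be as small as a cheap VPS or a box at another site running a cron check; a hedged sketch (the target host and the carrier's email-to-SMS gateway address are placeholders):

        #!/bin/bash
        # run from cron on an OUTSIDE host, e.g. every 5 minutes during off-hours
        target="office-gw.example.com"          # the one host exposed to remote ping
        pager="5551234567@txt.example.invalid"  # carrier email-to-SMS gateway
        if ! ping -c 3 -W 5 "$target" >/dev/null 2>&1; then
            echo "office network unreachable at $(date)" \
                | mail -s "ALERT: office down" "$pager"
        fi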
