Search Results

Search found 10548 results on 422 pages for 'standard deviation'.


  • TCP Server Memory management: #Connections Vs. #Requests

    - by Andrew
    There is no theoretical limit to the number of concurrent TCP connections a Windows 2008 server can handle; the only cost is that each connection consumes server memory. Unfortunately, memory is not unlimited (and I want to use only physical memory). For example, let's say the server has 2 GB of memory. There are two extreme cases. Case 1: If we allocate a 64 KB buffer for each connection (only to receive the incoming request), then 32,768 connections consume the entire 2 GB, leaving no memory to queue or process the requests arriving on those connections. Case 2: On the other hand, if a single connection (or very few) keeps sending request buffers continuously (for example, video streaming from one connection to another) and the server cannot process them in time, those buffers pile up and eventually occupy most of the server's memory, leaving none for new connections. This is the dilemma in server design that has been bugging me for days. If I can decide on a maximum request-buffer size per connection and a maximum number of queued requests per connection, then, based on the available server memory, that automatically sets a limit on the maximum number of concurrent connections (a rough sizing sketch follows below). How do I decide on these limits to achieve the best performance and throughput? I am simply looking for ideal utilization of server resources. Are there any standard guidelines or empirical data that someone can share?
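
    For illustration, a back-of-the-envelope sizing sketch in bash; every number here is an assumption, chosen only to show how fixing the two per-connection limits yields a connection cap automatically:

      MEM_BYTES=$((2 * 1024 * 1024 * 1024))   # memory budgeted for connection buffers (2 GB)
      RECV_BUF=$((64 * 1024))                 # receive buffer per connection
      QUEUE_DEPTH=4                           # max queued requests allowed per connection
      REQ_BUF=$((16 * 1024))                  # assumed max size of one queued request buffer
      PER_CONN=$((RECV_BUF + QUEUE_DEPTH * REQ_BUF))
      echo "max concurrent connections: $((MEM_BYTES / PER_CONN))"   # ~16384 with these numbers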

    Read the article

  • OS X can connect to Windows machine, but can't access shared folders

    - by Bonnie
    I can create new folders on my Windows XP machine and set them to "shared". On my Mac, I pick Finder → Go → Connect to Server → smb://192.168.1.4 → Connect → Name / Password. It even shows me the names of the newly created shared folders on my PC, but when I try to actually connect to any of them I get "connection failed, there was an error connecting". Any idea what would cause that? The fact that it gets this far (to actually showing me my PC share names) must mean I have 99% of this working correctly: the physical connection, the IP address, the user name, the password, etc. Still, I can't seem to access the folders themselves. I've tried this with my Windows XP firewall on/off, and Norton AntiVirus on/off. Same problem. Everything worked fine four months ago. Were there any odd OS X or Windows updates released recently? I always apply them all. smbclient on the Mac does correctly find the XP machine, my XP user name, and accepts my XP password. I get the following from that smbclient command:
      Doing spnego session setup (blob length=16)
      server didn't supply a full spnego negprot
      Got challenge flags: ...
      Got NTLMSSP flags: ...
      Got NTLMSP flags: ...
      Domain=[XPMACHINE] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]
      tree connect failed: NT_STATUS_INSUFF_SERVER_RESOURCES
    I'm not sure why a standard XP box can't "supply a full spnego negprot", whatever that means. Using XP's RegEdit to change my IRPStackSize from 11... to 13, 15, 20, 22... still gives that NT_STATUS_INSUFF_SERVER_RESOURCES error on the Mac.

    Read the article

  • Outlook Signature Broken in Entourage

    - by Eric J.
    Some of our company uses Windows with Outlook 2010, and the rest use Mac with Entourage. When our standard signature line is included in an email that goes to Entourage, the result does not display correctly. It appears that Entourage is mangling the HTML. My working theory is that Entourage encounters inline CSS styles it does not know about and stops processing styles, but I'm really not sure. Question: How can I enter a signature into Outlook 2010 that will render correctly in Entourage? For example, can I specify somehow the exact HTML to use? Here's an example of how the HTML is being changed. Original on Outlook, as received by another Outlook client: <span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif"; color:#1785C5'>My Company<br> </span></b><span class=apple-style-span><span style='font-size:9.0pt; font-family:"Century Gothic","sans-serif";color:#666666'>123 Main St.</span></span><span class=apple-style-span><span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif"; color:#AFAFAF'>&nbsp;</span></span><span class=apple-style-span><span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif";color:#666666'>Suite 100</span></span> Note the use of spans, color #1785C5 and color #666666. Same original email, as displayed in an Entourage client: <span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif"; mso-fareast-font-family:"Times New Roman"'><br> <span style='color:#656565'>My Company<br> 123 Main St Suite 100<br> </span> Note the use of br tags rather than spans, and the color #656565.

    Read the article

  • How to set up mysql storage for certain rsyslog input matches?

    - by ylluminate
    I'm draining various logs from Heroku to an rsyslog Linux (Ubuntu) server, and I'm starting to have a little more than I can chew in terms of working with my log histories. I need to be able to drill back in time with more flexible criteria and more flexible access than the standard syslog file(s) provide. I'm thinking that logging to MySQL may be the correct approach, but how do I set this up so that it pulls only certain log entries into a table based on an identifier? For example, I see a long hex string identifying each log entry from a certain Heroku app instance. I assume I can pipe just those into the MySQL socket rather than ALL rsyslog input (a sketch of one way to do this follows below). Could someone please direct me to a resource that walks through setting something like this up, or simply provide some details that can help? I have 15+ years of Unix experience, so I just need a nudge in the right direction; I have not done a tremendous amount of work with syslog daemons in terms of pooling various servers into one. Additionally, I'd be interested in any log-review tools that make drilling through log arrangements like this handier for developers.
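
    One possible shape for this, offered only as an untested sketch: the drain token, database name and credentials below are placeholders, and the Syslog schema is the one shipped with rsyslog-mysql (createDB.sql):

      # /etc/rsyslog.d/60-heroku-mysql.conf
      $ModLoad ommysql
      # send only entries whose message carries this app's token to MySQL...
      :msg, contains, "d.01234567-89ab-cdef-0123-456789abcdef" :ommysql:localhost,Syslog,rsyslog,secret
      # ...and discard them so they don't also land in the flat files
      & ~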

    Read the article

  • Finding custom printer-specific options on *nix

    - by enbuyukfener
    Hey all, I have a printer set up using CUPS (FujiXerox Document Center 1100); it is named DC1100. There are capabilities of the printer that are not shown as options when listed by:
      lpoptions -l -d DC1100
    The output is below:
      PrintoutMode/Printout Mode: Draft *Normal High Photo
      InputSlot/Media Source: Upper Lower MultiPurpose LargeCapacity Manual *Standard
      PageSize/Page Size: *Letter A4 C5 C6 COM10 DL Executive Legal Monarch Statement
      PageRegion/PageRegion: Letter A4 C5 C6 COM10 DL Executive Legal Monarch Statement
      STP_Brightness/Brightness: 0.00 0.02 0.04 [snip] 2.00
      STP_Contrast/Contrast: 0.00 0.05 0.10 0.15 [snip] 4.00
      STP_ColorCorrection/Color Correction: Accurate Bright Density [snip] Uncorrected
      STP_DitherAlgorithm/Dither Algorithm: Adaptive EvenTone Fast [snip] VeryFast
      STP_EnableDensity/Density Enable: *Disabled Enabled
      STP_Density/Density Value: 0.1 0.2 0.3 0.4 [snip] 8.0
      STP_EnableGamma/Composite Gamma Enable: *Disabled Enabled
      STP_Gamma/Composite Gamma Value: 0.10 0.15 0.20 0.25 [snip] 4.00
      STP_LinearContrast/Linear Contrast Adjustment: *False True
      STP_Duplex/Double-Sided Printing: DuplexNoTumble DuplexTumble *None
      Resolution/Rendering Resolution: *FromPrintoutMode 150x150dpi 300x300dpi 600x600dpi
      OutputType/Output Type: *FromPrintoutMode BlackAndWhite Grayscale
      STP_ImageType/Image Type: *FromPrintoutMode Photo Graphics LineArt None Text TextGraphics
      STP_Resolution/Resolution: *FromPrintoutMode 150dpi 300dpi 600dpi
    I am particularly looking for options for: "secure print" (possibly by setting a mode and a username), stapling, and hole punching. Perhaps I need a vendor-specific driver/PPD file? If so, any pointers, as I have no idea where to look for one. I haven't been able to find one on the official site or on sites such as http://www.openprinting.org
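
    If a vendor PPD can be found (the driver CD or openprinting.org are the typical sources), one way to try it is to re-point the existing queue at it and re-list the options; the PPD path below is an illustration, not a known location, and the extra features (secure print, stapling, punching) will only appear if the vendor PPD actually defines them:

      # install the vendor PPD on the existing queue (path is an example)
      lpadmin -p DC1100 -P /usr/share/cups/model/fx-dc1100.ppd
      # re-list what the new PPD exposes
      lpoptions -p DC1100 -l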

    Read the article

  • Colorizing your terminal and shell environment?

    - by Stefan Lasiewski
    I spend most of my time working in Unix environments and using terminal emulators. I try to use color on the command line, because color makes the output more useful and intuitive. What are some good ways to add color to my terminal environment? What tricks do you use? What pitfalls have you encountered? Unfortunately, support for color varies wildly depending on terminal type, OS, TERM setting, utility, buggy implementations, etc. Here's what I do currently, after a lot of experimentation: I tend to set TERM=xterm-color, which is supported on most hosts (but not all). I work on a number of different hosts and different OS versions, and I'm trying to keep things simple and generic if possible. Many OSs set things like dircolors by default, and I don't want to modify this everywhere, so I try to stick with the defaults and instead tweak my terminal's color configuration. I use color for some Unix commands (ls, grep, less, vim) and the Bash prompt. These commands seem to use the standard "ANSI escape sequences". I've managed to find some settings which are widely supported and which don't print gobbledygook characters in older environments (even FreeBSD 4!), for the most part. From my .bash_profile:
      ### Color support
      # The Terminal application typically does 'export TERM=xterm-color'.
      # Some terminal types will print black, white & underlined with these settings.
      OS=`uname -s`
      case "$OS" in
        "SunOS" )
          # Solaris 9 ls doesn't allow color, so use special characters instead.
          LS_OPTS='-F'
          ;;
        "Linux" )
          # GNU tools support colors! See dircolors to customize colors.
          export LS_OPTS='--color=auto'
          # Color support using 'less -R'
          alias less='less --RAW-CONTROL-CHARS'
          alias ls='ls ${LS_OPTS}'
          export GREP_OPTIONS="--color=auto"
          ;;
        "Darwin"|"FreeBSD")
          # Most FreeBSD & Apple Darwin versions support colors.
          # LS_OPTS="-G"
          export CLICOLOR=true
          alias less='less --RAW-CONTROL-CHARS'
          export GREP_OPTIONS="--color=auto"
          ;;
      esac

    Read the article

  • Are there any disadvantages of having a "free fall sensor" on a hard disk drive?

    - by therobyouknow
    This is a general question that came out of a specific comparison between the Western Digital Scorpio WD3200BEKT and the Western Digital Scorpio WD3200BJKT (which is the same as the former but with a free fall sensor). Note: I'm not asking for a review or appraisal of these specific drives, as the general question applies to other brands as well, though your input would help my decision. To break the general question down, I would be looking for comments on things like: does a free fall sensor require different physical dimensions than a drive without one, e.g. does it make the drive any thicker, and therefore reduce the systems where it can be installed, particularly smaller laptops? Does it actually make the system less reliable, because of false alarms where the drive thought the laptop was falling but it wasn't? I suppose the fact that a manufacturer produces drives both with and without free fall sensors says something about possible disadvantages. Or it could be a standard marketing technique, whereby offering drives both with and without the sensor results in a larger sales volume than selling only the drives with the feature.

    Read the article

  • MySQL ODBC + SSL with only the SSL Cipher option?

    - by sdek
    Does anybody know how I can get an SSL-encrypted connection over MySQL ODBC without the cert options? I asked my web host to set up a MySQL+SSL connection so that we can access our website's database via ODBC or MySQL Query Browser (or the like). I am able to get an encrypted connection with the standard mysql client and MySQL Query Browser, but I can't get the ODBC connection to work. Looking for a little help... The way they set it up is a little different from what I have read about on the interweb. The host didn't set up a cert, or at least I don't think so; I don't need to specify any cert options in my connection, I just need to specify the SSL cipher. Here is how I connect with the mysql client:
      mysql -h myhost.com -u myuser --ssl-cipher=3DES -p
    That works to get an encrypted connection. At least I am pretty sure it works, because when I run \s at the mysql prompt I get:
      SSL: Cipher in use is EDH-RSA-DES-CBC3-SHA
    Also, when I put EDH-RSA-DES-CBC3-SHA into the SSL Cipher field of MySQL Query Browser (without specifying any other SSL options) it connects just fine. But when I try to do the same thing with the MySQL ODBC 3.51 and 5.1 drivers, I get a generic error. Here is the error from the 5.1 driver:
      Connection Failed: [HY000] [MySQL][ODBC 5.1 Driver]SSL connection error
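
    For the ODBC side, Connector/ODBC accepts an SSLCIPHER parameter in the connection string (or an "SSL Cipher" field in the DSN dialog); a DSN-less sketch is below, where the server, database, credentials and cipher value are only assumptions carried over from the commands above:

      DRIVER={MySQL ODBC 5.1 Driver};SERVER=myhost.com;PORT=3306;DATABASE=mydb;UID=myuser;PWD=secret;SSLCIPHER=EDH-RSA-DES-CBC3-SHA;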

    Read the article

  • Multiple *NIX Accounts with Identical UID

    - by Tim
    I am curious whether there is a standard expected behavior, and whether it is considered bad practice, when creating more than one account on Linux/Unix with the same UID. I've done some testing on RHEL5 with this and it behaved as I expected, but I don't know if I'm tempting fate by using this trick. As an example, let's say I have two accounts with the same IDs:
      a1:$1$4zIl1:5000:5000::/home/a1:/bin/bash
      a2:$1$bmh92:5000:5000::/home/a2:/bin/bash
    What this means is: I can log in to each account using its own password. Files I create will have the same UID. Tools such as "ls -l" will show the owner as the first matching entry in the passwd file (a1 in this case). I avoid any permission or ownership problems between the two accounts because they are really the same user. I get login auditing for each account, so I have better granularity for tracking what is happening on the system. So my questions are: Is this ability by design, or is it just the way it happens to work? Is this going to be consistent across *nix variants? Is this accepted practice? Are there unintended consequences to this practice? Note: the idea here is to use this for system accounts, not normal user accounts.

    Read the article

  • How to organize deployment process in Chef-controlled environment?

    - by Alex
    I have a web Linux-based infrastructure which consists of 15 virtual machines and over 50 various services. It is fully controlled by Chef. Most of the services are developed internally. Basically, the current deployment process is triggered by a shell script. A build system (a mix of Python and shell scripts) packages the services as .deb files and puts these packages into a repo. It then runs apt-get update on all 15 nodes, because the standard Chef apt cookbook only runs apt-get update once per day and we definitely do not want to run it unconditionally on every chef-client wake-up. Finally, the build system restarts the chef-client daemons on all 15 nodes (we need this step because of Chef's pull nature). The current process has a number of drawbacks we want to address. First, it is asynchronous: the deployment script does not check the chef-client logs after the restart, so we don't even know whether the deployment was successful, and it does not wait for the Chef clients to complete their cycle. Second, we definitely do not want to force chef-client restarts on all nodes, because we usually deploy only a small number of packages. And third, I am not quite sure that using chef-client for deployment is legitimate; perhaps we are just doing it wrong from the start. Please share your thoughts/experience.
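
    One way such a deploy is often made synchronous, sketched here as an assumption rather than a known-good recipe (the role name and SSH user are placeholders): trigger an immediate converge with knife ssh on just the affected nodes and let the command's exit status surface failures instead of guessing from logs:

      knife ssh "role:myservice" "sudo chef-client --once" --ssh-user deploy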

    Read the article

  • Excel workbook intermittently takes 30 seconds to load

    - by Julio Nobre
    I am trying to figure out why a simple .XLS Excel workbook is, randomly, taking 30 seconds to open. Before answering, please bear in mind the following.
    Problem symptoms:
    - The hang is intermittent and takes exactly 30 seconds.
    - During the hang there is no CPU or disk activity.
    - It only happens while the workbook loads; everything runs smoothly after that.
    - Windows Explorer.exe hangs on the folder, but all other folders, the system and other applications remain responsive.
    - There are no consecutive hangs; I have to wait a while to reproduce the behaviour.
    - All workbooks are located on a local drive (C:\BPI).
    - The workbook has no macros and no add-ins.
    - Office 2003 has been in use for several years.
    - The computer is running Windows XP.
    - The computer has several network mapped drives, all addressed to the main file server.
    - Recently, the main file server was replaced by Windows SBS 2011 Standard Edition.
    What I have done so far:
    - I traced the machine's Explorer.exe using Process Monitor, added the Duration column, and filtered by Duration > 1; that is how I found that the hang takes exactly 30 seconds. For further information, please refer to Oliver Salzburg's tutorial.
    - Using Process Monitor, I also figured out that five operations accounted for most of the sample's duration. Looking at the captured sample, the Operation column shows that one single operation was taking 29 seconds.
    - I have tried different workbooks (all of them smaller than 30 KB).
    - I have, temporarily, removed all shortcuts in the user's Documents folder that pointed to network drives or shares.
    - I ran CCleaner to fix registry issues.
    - I made sure there were no external links in the tested workbooks.
    - I have reproduced this behaviour for hours and researched extensively on the web.
    (Screenshot: Process Monitor's collected and filtered data.)

    Read the article

  • Insufficient channel capacity of 1GBit

    - by Roman S
    There is a caching server (Varnish): it receives data from Amazon S3 on request, saves it for some time and serves it to clients. We have run into the problem of insufficient channel capacity at 1 Gbit: peak load over roughly 4 hours completely chokes the channel. Server performance is sufficient for now. Approximately 4.5 TB of data are transmitted per day, and more than 100 TB accumulate per month. The first thought that comes to mind is simply to add one more 1 Gbit port and sleep peacefully until 2 Gbit is no longer enough (which may happen quite quickly) or until one server can no longer handle the load; then we just add new caching servers. But then we need a load balancer that always sends requests for one and the same URL to one and the same server (to avoid multiple copies of the same cached objects). Here are the questions: Does the balancer need bandwidth equal to the sum of the bandwidths of all the caching servers? What should we do if the balancer itself runs out of ports? Should we add more balancers, or solve the problem with round-robin DNS? What are the standard approaches to such problems? Can anyone recommend hosting companies that can solve this problem? We are interested in the American and European markets.
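
    For the URL-affinity requirement, one common technique (a sketch, not something from the original post) is consistent hashing on the request URI at the balancer; a minimal nginx example, assuming a reasonably recent nginx, with host names and ports as placeholders:

      upstream cache_pool {
          hash $request_uri consistent;   # the same URL always maps to the same cache node
          server cache1.example.com:6081;
          server cache2.example.com:6081;
      }
      server {
          listen 80;
          location / {
              proxy_pass http://cache_pool;
          }
      }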

    Read the article

  • SELinux blocking connection to sshd on Ubuntu 9.10

    - by Barton Chittenden
    When I try to log on to my laptop, which runs Ubuntu 9.10, the server rejects my login attempts. Checking /var/log/auth.log, I see the following:
      Feb 14 12:41:16 tiger-laptop sshd[6798]: error: ssh_selinux_getctxbyname: Failed to get default SELinux security context for tiger
    I googled for this and ran across the following thread: http://www.spinics.net/lists/fedora-.../msg13049.html Here's the part that I think relates to the problem I'm having:
      > What's wrong on my system? Why is it not possible to log in even if SELinux is in permissive mode? Any suggestions?
      I'd start by trying to figure out why sshd isn't running in sshd_t (it seems to be running in sysadm_t). Paul.
      > Yes, sshd is running in sysadm_t:
      >   ps axZ | grep sshd
      >   system_u:system_r:sysadm_t 3632 ? Ss 0:00 /usr/sbin/sshd -o PidFile=/var/run/sshd.init.pi
      >   ls -Z /usr/sbin/sshd
      >   system_u:object_r:sshd_exec_t /usr/sbin/sshd
      > Don't know why it's not sshd_t. I didn't modify anything. It's a standard installation of SLES 11 with the default reference policy from Tresys. Maybe this snippet from policy/modules/services/ssh.te is responsible for that:
      >   # Allow ssh logins as sysadm_r:sysadm_t
      >   gen_tunable(ssh_sysadm_login, true)
      > Any ideas?
      Do you have the boolean init_upstart set to on? If not, try setting it to on. I do not believe the ssh_sysadm_login boolean works currently, but I may be mistaken.
      > Yeah, setting init_upstart to on did the trick! THANKS A LOT! Do you know why this prevents the user from logging in through ssh even if SELinux is set to permissive?
    OK, so the million-dollar question is: where do I set "init_upstart=1"? It's not clear from the context which configuration file needs to be edited, and I'm not at all familiar with SELinux configuration.
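
    For the question at the end: init_upstart is an SELinux boolean, so it is not set in a configuration file but toggled with setsebool (from policycoreutils); a sketch of the usual commands, assuming the policy on this machine actually defines that boolean:

      getsebool init_upstart             # show the current value
      sudo setsebool -P init_upstart on  # -P makes the change persistent across reboots
      ps axZ | grep sshd                 # afterwards, check whether sshd now runs in sshd_t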

    Read the article

  • Tomcat OutOfMemory after switching JVM

    - by Mathias
    I have a Tomcat 6 server running on Debian Squeeze. There are 4 webapps running on it, plus an in-JVM ActiveMQ server. It has been running for about a year with the same memory settings, on openjdk-6. Everything has worked dandy, no issues at all. Now, for various reasons, I need to try out Sun's JDK. So I installed Sun's JVM with a standard apt-get install sun-java6-bin, and switched using update-java-alternatives -s java-6-sun. However, when I start Tomcat, I get OutOfMemoryError; the server won't even start! If I switch back to OpenJDK, all works fine again. I haven't had any memory issues on this server before, so it feels strange that the server suddenly won't start with Sun's JDK. Does anybody have any clue as to why this might happen? Have I missed something? I naturally have set heap sizes etc. in Tomcat; currently running with: -Xms256m -Xmx1024m which, as mentioned, works on OpenJDK but gives OutOfMemoryError on the Sun JDK... EDIT: the server is 64-bit; OpenJDK and Sun are both 1.6.0, both 64-bit JVMs.
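
    One thing worth ruling out, offered only as a hedged sketch (paths, variable names and sizes are assumptions about a Debian tomcat6 package install, not a known fix): confirm that the heap flags really reach the Sun JVM, and give it an explicit PermGen ceiling, since PermGen exhaustion is a frequent culprit when several webapps share one Tomcat:

      # /etc/default/tomcat6
      JAVA_HOME=/usr/lib/jvm/java-6-sun
      JAVA_OPTS="-Xms256m -Xmx1024m -XX:MaxPermSize=256m"
      # then check what the running process actually received:
      #   ps -ef | grep tomcat6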

    Read the article

  • How can I pause console output in rxvt?

    - by Javid Jamae
    I'm running rxvt in Cygwin on a Windows box. This is how I invoke it:
      rxvt -sr -sl 2500 -sb -geometry 90x30 -tn rxvt -fn "Lucida Console-14" -e /usr/bin/bash --login -i
    Does anyone know how to pause the console output in rxvt? I can use Ctrl-S / Ctrl-Q to pause / un-pause, but this won't work if a script is already running and spewing output to stdout. Highlighting the terminal window with the mouse doesn't seem to work like it does in other consoles, such as the standard Cygwin console or the Windows command prompt console. Some sort of scroll lock would be nice, but I can't seem to find any way to do this. I know I could just pipe my output to a file, but I want a way to pause the output of something that I didn't expect to explode with console output. Basically I want to scroll back while it's running, without it constantly moving me to the bottom of the output buffer as it writes more data to stdout. I don't particularly care whether the solution actually pauses the script (like highlighting with the mouse does in the Windows command window) or just scroll-locks and lets me scroll while the underlying script keeps running, though I'd like to know how to do both if possible.

    Read the article

  • Windows 7 loses correct time zone upon reboot

    - by Android Eve
    I have a standard PC running Windows 7 Ultimate (64-bit). For some reason, it refuses to keep the correct time when restarted (the BIOS battery is OK). Note (1): The time zone is correct. The "Internet Time" tab also shows "this computer is set to automatically synchronize with 'time.windows.com'", and when I click the 'Change settings...' button, the 'Synchronize with an Internet time server' checkbox is checked. Still, upon reboot, the time is skewed by 6 hours... and doesn't correct itself even after waiting hours for this automatic synchronization to occur. Note (2): The BIOS time is set to local time (i.e. not UTC). When I restart Windows 7 without booting into the other OS installed in the dual-boot config (Ubuntu Linux), it correctly remembers the time. That may explain why the time is wrong immediately after a reboot, but it doesn't explain why Windows 7 won't automatically synchronize with an Internet time server even after an hour. Why is this happening and how do I correct it?
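
    A commonly cited explanation for exactly this dual-boot pattern, offered as an assumption rather than a diagnosis: the two operating systems disagree about whether the hardware clock holds local time or UTC, and each "fixes" it for the other. Two usual remedies (only one is needed):

      # Ubuntu side: declare that the hardware clock is kept in local time,
      # matching the BIOS setting described above. In /etc/default/rcS:
      UTC=no

      # Windows side (alternative): tell Windows to treat the hardware clock as UTC by
      # creating the DWORD value RealTimeIsUniversal = 1 under
      # HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation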

    Read the article

  • What does "incoming" and "outgoing" traffic mean?

    - by mgibsonbr
    I've seen many resources explaining how to set up a server's firewall to allow incoming and outgoing traffic on the standard HTTP ports (80 and 443), but I can't figure out why I would need either of them. Do I need to unblock both for a "regular" web site to work? For file uploads to work? Are there situations where it would be advisable to unblock one and leave the other blocked? Sorry if that's a basic question, but I couldn't find it explained anywhere (also, I'm not a native English speaker). I know that on a "regular" web site the client is always the one who initiates a request, so I'm assuming a web server must accept incoming traffic on those ports, and my common sense tells me the server is allowed to send a response without unblocking anything else (otherwise it wouldn't make sense to have two types of rules). Is that correct? But what is outgoing web (service) traffic, and what would be its use? AFAIK, if the server wanted to initiate a connection with another machine, the port that matters is the one on the other end (i.e. the destination port would be 80); on its own end any free port could be used (the source port would be random). I can make HTTP requests from my server (using wget, for instance) without unblocking anything. So I'm assuming my concepts of "incoming" and "outgoing" are somehow wrong.
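
    As an illustration of that reasoning (a sketch, not taken from the post), a stateful iptables rule set only has to accept new inbound connections on the web ports; replies in both directions ride on connection state, and a separate "outgoing 80/443" rule only matters for connections the server itself initiates when the OUTPUT policy is restrictive:

      iptables -A INPUT  -p tcp -m multiport --dports 80,443 -m state --state NEW -j ACCEPT
      iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT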

    Read the article

  • Desktop Provisioning for a Small Linux Software Development Team

    - by deakblue
    Goal: get a small team using a standard development image rather than having 4 software devs set up their own environments. Why: it takes a day or more to install a distro, build-specific libraries, tools like editors and IDEs, mysql, couchdb, java, maven, python, the android-sdk, etc. It's a giant PITA that, when repeated 4 times by 4 developers (not sysadmins), wastes time and generates annoying divergences that crop up later (it-builds-on-my-box syndrome). There's no sharing of productivity, settings, tricks, scripts or set-ups. Some of this is helped by segregating the build systems into headless VirtualBox images, but that doesn't really address tooling, or the GUI-desktop development that needs doing. So I see three basic strategies: ghosting, virtualization, and finally creating a kind of in-house Linux distro (I guess Google does something like this). The target dev environment is based on Debian plus Openbox and must allow a mix of 3rd-gen Core i7 notebooks (8 GB minimum) to work both single- and multi-head. Importantly, the laptops are not the same; they're a mix of 2012 MacBooks and PCs. So:
    - Virtualization: is doing all of your work inside a VM, like VirtualBox, practical on this hardware, or annoying?
    - Ghosting: will laptops from different manufacturers make this impractical?
    - DIY distro: short of scripting a bunch of package installs (see the sketch below), I don't know if there's any "distro-maker" that could keep this from being an epic project.
    Any advice?
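
    A minimal "scripted standard image" sketch; the package list and the dotfiles repository are purely illustrative placeholders, not a recommendation from the post:

      #!/bin/bash
      set -e
      sudo apt-get update
      sudo apt-get install -y openbox git-core vim maven2 openjdk-6-jdk python mysql-server couchdb
      # shared team settings and helper scripts live in one repo so set-ups stop diverging
      git clone git://git.example.com/team-dotfiles.git "$HOME/.dotfiles"
      "$HOME/.dotfiles/install.sh"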

    Read the article

  • Interesting phenomenon with Windows Server 2008 R2 user access controls and NTFS ACLs

    - by Simon Catlin
    One to try, and I'd appreciate any thoughts on this. On a Windows Server 2008 R2 box (or presumably 2008 R1, Windows Vista or Windows 7):
    i) Log on as an administrator and create a new NTFS volume.
    ii) Blow away the standard MS ACLs on the root of the volume (which are laughable) and replace them with Administrators:Full Control, System:Full Control, e.g.:
      echo Y|cacls.exe d:\ /g "Administrators:F" "SYSTEM:F"
    iii) Now, from a Command Prompt or PowerShell window, switch to that drive (cd /d D:\ or set-location D:\ ). Works fine... no issues.
    iv) Now try to browse to the root of the new volume using Explorer... Access denied.
    I've kind of convinced myself that it is UAC getting in the way, as you can add "Authenticated Users:List" access to D:\ and Explorer then works. I can only assume that Explorer isn't able to use the elevated "admin" token for the Administrator. Browsing to explorer.exe and doing a "Run as administrator" has no effect. Any thoughts? Cheers in advance.

    Read the article

  • A few questions about a good projector for my PC and TV

    - by jasondavis
    I have always wanted a projector for my TV, satellite, cable and even my PC in a spare bedroom. Well, it's more of a home office that I spend most of my time in, and the catch here is that it is a small room: only the standard 8 feet tall, and about 13 feet across between the wall where I would like to mount the projector and the wall where the screen would be, so only about 13 feet from projector to screen. I would like to know... 1) From experience or knowledge, what would be a good projector I could hook up to my satellite box and also my PC? Cheaper is better in this case, but I would still like the best image for my buck and something reliable. There is no sunlight in the room to worry about either. 2) From that distance of about 12-13 feet, how big a clear picture could I expect? 3) What kind of cables would I need to purchase and run through my attic to my cable/satellite receiver box as well as my PC? 4) These cables in question 3 would most likely need to be a good 15-20 feet in length to reach; would I need anything special for them to work at those distances?

    Read the article

  • Disk wipe preferences

    - by hmvm123
    I manage a pool of systems that are loaded with software and sent to potential customers for evaluations, which often leaves sensitive information on the drives. Before shipping them back, customers typically like a standard wipe to be run to clean the drives. Most are familiar with DBAN, so I try to make sure it can work on my systems. Unfortunately, this means I'm usually in RAID-driver hell, trying to make sure that the DBAN versions out there support the controllers my systems ship with; these are various kinds of 3ware and LSI controllers. Consequently, I have DBAN 1.0.7 working on some, a beta version of 2.0 on others, and 2.2.6 on some of the latest SSD-based ones. Now, with the LSI controllers (1064/1068) on my IBM x3550 M3s, I'm getting no love at all. Is there a way out? Do you buildroot with DBAN and try to piece the drivers together? Are there any other tools, free or commercial, that stay updated? I'm trying to walk people of varying technical proficiency through this, so a boot disk with simple choices is preferable.
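
    One fallback sometimes used when DBAN lacks a controller driver, noted here as an assumption rather than a recommendation: boot any current live Linux that does recognise the controller and wipe from there (device names and pass counts below are examples):

      lsblk                           # confirm which block device is the array
      shred -v -n 1 -z /dev/sda       # one random pass, then a final zeroing pass
      # nwipe, the maintained fork of DBAN's wipe engine, offers DoD-style methods:
      nwipe --method=dodshort /dev/sda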

    Read the article

  • Why would I need a firewall if my server is well configured?

    - by Aitch
    I admin a handful of cloud-based (VPS) servers for the company I work for. The servers are minimal Ubuntu installs that run bits of LAMP stacks / inbound data collection (rsync). The data is large but not personal, financial or anything like that (i.e. not that interesting). Clearly, people on here are forever asking about configuring firewalls and such like. I use a bunch of approaches to secure the servers, for example (but not restricted to):
    - ssh on non-standard ports; no password typing, only known ssh keys from known IPs for login, etc.
    - https, and restricted shells (rssh), generally only from known keys/IPs
    - servers are minimal, up to date and patched regularly
    - things like rkhunter, cfengine, lynis and denyhosts for monitoring
    I have extensive experience of Unix sysadmin. I'm confident I know what I'm doing in my setups. I configure /etc files. I have never felt a compelling need to install things like firewalls: iptables etc. Put aside for a moment the issue of physical security of the VPS. The question: I can't decide whether I am being naive, or whether the incremental protection a firewall might offer is worth the effort of learning and installing it and the additional complexity (packages, config files, possible support, etc.) on the servers. To date (touch wood) I've never had any security problems, but I am not complacent about it either.

    Read the article

  • Failure to connect to admin share pops up dialog

    - by Jan
    I'm having an issue with a curious error message when accessing the administrative share on a remote machine. Specifically, the client is logged in as the domain administrator on machine A and runs some code that tries to access the admin share on B (a domain member). The access is done in .NET, along these lines (though I am not sure whether the method of access makes a difference):
      string path = @"\\B\admin$";
      if (Directory.Exists(path))
      {
          try
          {
              path += @"\temp\";
              if (!Directory.Exists(path))
              {
                  Directory.CreateDirectory(path);
              }
              path += "myfile_remote";
              File.Copy("myfile", path);
          }
          catch (IOException)
          {
              // fall back to the alternative transfer mechanism mentioned below
          }
      }
    Now, on some machines this fails. That is not a big problem, as we have a fallback. I'd like to know why, but that is not the real issue. The problem is that running this piece of code causes a dialog box to pop up for the logged-in user on B, saying "network error trying to access \\B\admin$\temp\myfile_remote. Contact the network administrator and ask for the correct permissions". Unfortunately, it is a foreign-language Windows, so I'll spare you all a screenshot. It is skinned like a standard Windows dialog box. Why exactly is that dialog box popping up for the user, and is there anything I can do about it? Edit to add: B is a Windows 7 Enterprise installation. The client is not aware of any GPO policies being installed. There is AV from Trend Micro installed.

    Read the article

  • Transferring an SQL Processor License to a virtual hosted environment

    - by Andrew Shepherd
    My company is currently hosting a service in-house, and we want to move to an externally hosted environment. We would then be using a virtual server. I understand that this might be spread across multiple machines, but from my perspective as a customer that layer is abstracted away; I shouldn't know or care about the hardware the OS is hosted on. We have a licensed edition of SQL Server 2008: one processor license. Would it be a violation of the licensing agreement to use this in a virtual environment? From the reference guide it says: "When licensed Per Processor - With Workgroup, Web, and Standard editions, for each server to which you have assigned the required number of per processor licenses, you may run, at any one time, any number of instances of the server software in physical and virtual operating system environments on the licensed server. However, the total number of physical and virtual processors used by those operating system environments cannot exceed the number of software licenses assigned to that server." For Enterprise edition there is an added option: "if all physical processors in a machine have been licensed, then you may run unlimited instances of SQL Server 2008 in one physical and an unlimited number of virtual operating environments on that same machine." I'm having trouble getting my head around this. Would I theoretically have to get a license for every processor in this virtual environment (which is effectively impossible, because I have no way of knowing how many processors there actually are)? Or can I just say that it's hosted on one "virtual" server, so that's OK?

    Read the article

  • Webcam becomes "Unknown Device" after Windows Messenger 2011 is installed

    - by Boris
    I have a Sony VAIO VGN-NS290J laptop on which I installed Windows 7 Ultimate 64-bit. I was able to find drivers for all the hardware without any problems. Recently, I installed Microsoft Windows Live Essentials 2011, i.e. Windows Live Messenger 2011. Ever since that application has been on my computer, my webcam is no longer recognized by the OS: it is listed as "Unknown Device" and placed in the Universal Serial Bus controllers group in Device Manager. There don't seem to be any drivers for this webcam. It's a standard Sony Motion Eye web camera, and Sony does not offer drivers for it; there is one downloadable application that uses the camera, but no drivers (and the system shows the same behavior regardless of whether that application is present). From time to time the webcam becomes recognized by the OS again after a couple of restarts, but not always; then it becomes unknown again. I am absolutely positive that this issue is caused by Windows Live Messenger 2011, because the same thing has caused the same symptoms before. I wish to be able to continue using this software, but also to use my webcam. I was wondering if anyone has had a similar issue and whether there is a way to fix it. Thanks for all the help, I appreciate it. Update: I have discovered a pattern: if the camera goes astray, restarting the machine does not bring it back, but switching the computer off and turning it back on does. Every time! This is getting super complicated :)

    Read the article
