Search Results

Search found 9147 results on 366 pages for 'big smile'.

Page 271/366 | < Previous Page | 267 268 269 270 271 272 273 274 275 276 277 278  | Next Page >

  • Looking for a real DisplayPort hub/splitter

    - by squircle
In my search for a new display, I came across the Dell Multi-Monitor Hub MMH11, which seemed to be an alternative to my search for daisy-chainable DisplayPort displays. However, before I cave and spend $179 on this device, I am wondering if this will be similar to other splitting devices where it appears to the computer as one big monitor and the device does the splitting (which I don't want). Or does this use the packet-based nature of DisplayPort to present two/three separate displays to the computer? Also, would this device work on my MacBook Pro? (I know the Dell site says it's for Windows, but it also says that no driver installation is required. I'd assume that since the MBP supports DP 1.2 it would work, but it's better to ask.) Thanks! Edit: I've checked out the similar-looking Cirago DisplayPort splitter, but I have extreme doubts as to whether it's a genuine DisplayPort splitter or just another monitor conglomerate. Their DVI solution looks identical to Dell's, which I'm pretty sure won't do what I want. I also don't want to order this DisplayPort "hub" and find that it doesn't do what I want it to.

    Read the article

  • processes slow after some time of actively running

    - by Yervand Aghababyan
I have several cron jobs running on an Ubuntu machine, each doing some pretty heavy-load work. The cron jobs parse files, and the bigger the file, the longer it takes to parse. The strange thing is that if I make the files too big (around 30 MB) the script sort of hangs: it starts processing them enthusiastically, but after some time (something like 5-10 minutes) the CPU usage of the process drops a lot and it enters some "zombie" state. If prior to this the process was using 70-80% of the CPU in htop, after this drop it slows down to something like 5-10%. The load average drops as well. The status of the processes sometimes changes to D in htop, which AFAIR stands for zombie. Today I noticed the same behavior from mysql processes when executing heavy queries (one query took something like 4 hours to execute). The cron jobs are mostly PHP, and during their processing it's the php process, not mysql, that eats most of the CPU, so I think the issue is not with a specific language/program but with the way the processes are "managed". The only other place I've seen similar behavior was on my Amazon EC2 micro instance, where after some aggressive CPU use the CPU quota would take effect and everything would slow down dramatically. This, however, is a dedicated machine running Ubuntu. What may be the cause?
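    A minimal diagnostic sketch (assumes the sysstat package is installed for iostat). Note that D in htop/ps is uninterruptible sleep, usually a process blocked on disk I/O, rather than zombie (which is Z); these commands check whether the slowdown coincides with disk wait rather than CPU exhaustion.

        # List processes currently stuck in D state and what they are waiting on
        ps -eo state,pid,wchan:32,cmd | awk '$1 ~ /^D/'

        # Watch %iowait and per-disk utilisation while the cron job slows down
        iostat -x 5

        # Overall picture: the "wa" column spiking while "us" drops points to I/O, not CPU
        vmstat 5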

    Read the article

  • What is the max connections via remote desktop for a small server?

    - by Jay Wen
I have a small server running MS Server 2012. The CPU is a Xeon E3-1230 V2 @ 3.30 GHz, 4 cores, 8 logical processors, 8 GB RAM. The main HD is a Samsung 840, and the big storage is a 4-disk WD Black RAID 10 array in a Synology NAS enclosure. My question is: given this hardware, approximately how many users can the system support via "Remote Desktop Connection"? Assume there are no licensing limits. These are not admin users; I know there is a two-admin limit. This boils down to: what resources does one remote connection require? RAM? % of the CPU? Network bandwidth? I guess the base case would be a connection where the user is inactive or simply browsing CNN. Once you know this, you know how many you could fit on the machine before something is maxed out. In reality, users would mostly be in Excel (multi-MB spreadsheets). I know the approximate resources currently required by each copy of Excel.

    Read the article

  • Formatting a memory stick with two partitions?

    - by Marius
I have a 16 GB memory stick which used to have a Linux partition. It therefore has two partitions: a 2 GB FAT32 partition and a 14 GB Linux boot partition. The Linux part stopped working, so I decided to reinstall it, but Windows can't see that partition. I tried formatting the whole disk, but I can only format one partition (the FAT32 one). There seems to be no way to combine the two partitions into one big one, and there seems to be no way for Windows to repartition the large part of the memory stick to put Linux on it. In the Windows partition manager, Windows sees the large unused partition and lets me delete it. But once I have deleted it, I'm not allowed to format it. I also cannot delete or resize the small partition. So, to summarize: I have a memory stick with two partitions. Windows only sees one of them and won't let me use the other. I would like to combine the two partitions so I can install Linux on the memory stick again.
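    A minimal sketch of wiping and re-partitioning the stick from a Linux live USB/CD, since Windows' own Disk Management often refuses to touch partitions beyond the first one on removable media. The device name /dev/sdX is a placeholder; verify it with lsblk before writing anything, because this destroys everything on the stick.

        lsblk                                                    # identify the 16 GB stick, e.g. /dev/sdb
        sudo parted /dev/sdX --script mklabel msdos              # fresh, empty partition table
        sudo parted /dev/sdX --script mkpart primary fat32 1MiB 100%   # one big partition
        sudo mkfs.vfat -F 32 /dev/sdX1                           # format it, or leave that to the Linux installer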

    Read the article

  • How to share text, pictures and video to restricted set of users

    - by joaoc
I want to share pictures and videos of my kid growing up, but I don't want them to be open to the public. I just want me and my wife to be able to add material and then share it with grandparents and friends, who would be given a username/password or some other solution that authenticates them. This content would preferably be available remotely over the internet or downloaded periodically to the allowed viewers' computers. The second option most likely means they would have to install a client, which I am not too fond of (I would have to ask them to install it, help configure it for different platforms, ...). I'm at the start, but I would like software that also allows export of the data (in case one day I want to migrate to a different solution). Folder sharing: I don't know if Windows folder sharing is reliable, robust and safe enough to use over the internet. Dropbox could be a software solution, but it's limited to 2 GB and requires receivers to also install and configure software. A folder is also just a collection of files, so it would be harder to keep them organized with text describing the pictures/videos. Email: I could email the content every now and then, but videos are almost always too big to go as attachments, and this would not provide a history of updates to anyone joining in recently. Are there other solutions to achieve this? NOTE: I also thought that software to run a local or remote blog could be an option, but we aren't allowed to ask those questions on Super User because one admin doesn't like cloud-computing questions. But if you do think it's the best approach and can suggest a solution, do state it in the answers anyway!

    Read the article

  • Puzzling TCP performance over 3G / UMTS

    - by lemonsqueeze
I'm using 3G as my primary internet connection, and TCP over this thing is getting more puzzling every day. For example: Downloading from kernel.org is crazy fast: $ wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.6.8.tar.bz2 climbs to ~500 kB/s after a few seconds! Some servers are incredibly slow, for instance www.graphic-pc.com: same thing, downloading a big file with wget starts at ~30 kB/s for a split second, then collapses to 5-10 kB/s or even worse. Web browsing is decent but somewhat unreliable: randomly, a page will take really long to load or even fail to load, but a reload can succeed almost immediately. Now, by chance I started playing with OpenVPN over UDP on top of the 3G connection, and suddenly everything is extremely fast! The same www.graphic-pc.com now shoots along at 100-200 kB/s! What's going on here? How come it is so much better with the VPN than without? And why does graphic-pc.com crawl when kernel.org flies? Something to do with my TCP stack (or the server), or some buggy router in between? Notes: The setup is a laptop running Ubuntu Lucid and a Huawei 3G dongle (so a direct pppd connection). I can reproduce this pretty much any time during the day and I'm not moving, so it's clearly not the cell environment or internet congestion (although kernel.org without VPN sometimes does worse in the evening, 60 kB/s or so - but still 500 kB/s with the VPN!). For the slow server, wireshark shows retransmitted packets, dup ACKs, and sometimes even out-of-order segments. I've tried playing with different /proc/sys/net/ipv4 parameters (tcp_rmem, window_scaling, tcp_congestion...); it doesn't seem to make a difference. Update: tried under Windows 7 (no VPN) with some interesting results:

                            default     tcp_optimizer
            kernel.org      10 kB/s     20 kB/s
            graphic-pc.com   8 kB/s     70 kB/s (!)

    tcp_optimizer turned on CTCP among other things. I have to check what OS graphic-pc.com is running; my bet is that Linux's tcp_westwood and MS CTCP don't mix well here...
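    A minimal sketch of things to measure before concluding anything, assuming a Linux box with root; the interface name ppp0 and the download URL are placeholders based on the setup described above. 3G links often interact badly with large TCP windows and particular congestion-control algorithms, so capturing one slow transfer and comparing algorithms is a reasonable first step.

        # Capture one slow transfer for later inspection in wireshark (ppp0 assumed for the dongle)
        sudo tcpdump -i ppp0 -w slow-download.pcap host www.graphic-pc.com &
        wget http://www.graphic-pc.com/somebigfile -O /dev/null      # hypothetical file path

        # See which congestion control algorithms are available and switch for a test
        cat /proc/sys/net/ipv4/tcp_available_congestion_control
        sudo sysctl -w net.ipv4.tcp_congestion_control=westwood      # may need: sudo modprobe tcp_westwood

        # Compare kernel-wide retransmission counters before/after a download
        netstat -s | grep -i retrans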

    Read the article

  • Nginx + Ubuntu 9.10, gzip not functioning

    - by Matt
Hey there, so I installed and configured Nginx 0.7.62 on a new Slicehost Ubuntu 9.10 slice. All seems to work fine with the server, except that gzip isn't working for one reason or another. I made sure that its settings were correct in /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 3;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            # multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;

            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 2;
            tcp_nodelay on;

            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript;
            gzip_disable "MSIE [1-6]\.";

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    This normally wouldn't be a big deal, but gzip support could save considerable bandwidth for my site. Does anyone have any ideas of what to check, or has anyone else run into this problem?
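    A quick way to check whether gzip is actually being applied is to request a page while advertising gzip support and look at the Content-Encoding and Content-Type headers; with the gzip_types above, only text/plain, text/css and application/x-javascript responses are compressed, so a mismatched MIME type would explain it silently. A minimal sketch, with example.com standing in for the real site:

        # Ask for gzip explicitly and inspect the relevant response headers
        curl -s -I -H "Accept-Encoding: gzip,deflate" http://example.com/ | grep -iE 'content-(encoding|type)'

        # Compare transferred sizes with and without compression
        curl -s -o /dev/null -w "%{size_download}\n" http://example.com/
        curl -s -o /dev/null -w "%{size_download}\n" -H "Accept-Encoding: gzip" http://example.com/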

    Read the article

  • Liked Arch Linux - but I am still not sure if its the right distro for me...

    - by BlackAndGold
As the title says... I used Arch Linux for a while and liked it a lot: lightweight, sleek, fast and well documented, with a great community. However, I had to format my HD, and for the sake of being too lazy to reinstall Arch, I used Windows 7 exclusively for a while. Now I want to get back to Linux again (still dual boot), mainly for web development purposes and for handy tools such as rsync etc. Again, I liked Arch, but there is too much tweaking, too much reading up and too much figuring out what to do, as well as some bad surprises, especially when you need them the least and just want to quickly get some work done. I would like a somewhat more "out of the box" distro that is still fast and somewhat lightweight, and of course reliable. I actually considered Ubuntu, which I am not too big a fan of, but I will still give the minimal install a shot. However, other distros seem interesting as well, such as CrunchBang and especially Mint Debian. My question, hoping this isn't too boring for many of you: what is the right distro for me?

    Read the article

  • Why does BitLocker need a minimum volume size of 64 MB?

    - by Iszi
    Since the future of TrueCrypt appears to be still unclear, I figured I'd try to get my stuff migrated into BitLocker at least for the time being. I nearly never have to access my encrypted data from anything that's not BitLocker-capable, so cross-platform compatibility isn't a big deal to me at this time. However, I am having a bit of an issue understanding the minimum requirement of a 64 MB volume. With TrueCrypt, I was able to protect small files (and most of my protected files are fairly small) in containers down to 300 KB or even less. When I finally created a VHD of an appropriate size last night (100 MB), it seemed the file system itself only took up about 3 MB and encrypting it with BitLocker didn't appear to take up any more. While 3 MB is still an order of magnitude larger than the smallest volume I could make with TrueCrypt, it's still relatively reasonable in comparison to 64 MB. This is an especially large amount of overhead (and largely wasted at that, since it's mostly empty space for now) when I consider that some of these volumes will be stored and synced in the cloud. What possible reasons could BitLocker have for needing volumes to be 64 MB large, when it's not even appearing to use that space? BitLocker FAQ on TechNet

    Read the article

  • Mac mini simple customized, Mac mini server or other?

    - by microspino
I'm facing a big IT choice for my little office and I need some advice. We have 5 users, 1 super user, 1 HP 500 DesignJet plotter, 4 other laser printers, and 1 HP fax/print/scan/copy machine. All the clients are XP SP3 boxes. We would like to: centralize and share 90 GB of files using something like Dropbox (this way we get LAN sync of local working directories + internet backup + access to our files wherever we are); centralize our plotter, printers and fax machine; back up all the workstations; share Outlook calendars and tasks; and run 24x7 while saving some energy. Of course this setup is just the first step towards a more serious and creative network management of our office, so we are open to new ideas. The budget varies from 400€ to 900€; we are not tech gurus, but at least one of us is a power user close to becoming a geek. I've read some articles on macminicolo about a Mac mini, either normal or with Snow Leopard Server. I've heard about Windows Home Server too on the Lifehacker website, but I'm in a sort of analysis paralysis - can you help me?

    Read the article

  • Dual-head monitor system Kubuntu 10.04

    - by andrii
I have an Asus V6X00V notebook with a 1400x1050 internal display (name: LVDS) and a Dell 1920x1080 monitor (VGA-0), and I want a dual-monitor setup. In MS Windows everything works fine. During the Kubuntu installation, the Dell and the notebook displays had the right resolutions (1920x1080 & 1400x1050), but at some stage both were changed to 1152x864. Now the right resolutions appear only during shutdown and when I am using the console, so the system clearly can use them; the problem is just in the settings. I am using Size & Orientation in System Settings for display adjustment. Any option that changes the resolution of either monitor or changes the position (Absolute, Left Of, Right Of and so on) causes colored line noise on the screens. I have tried xrandr:

        xrandr --output LVDS --mode 1400x1050 --pos 0x0 --output VGA-0 --mode 1920x1080 --right-of LVDS --pos 1400x0

    but got the same result. I have found out that, for example, the previous version of RandR (1.2; I now have xrandr 1.3) needed an xorg.conf modification to create a big virtual screen, but Kubuntu 10.04 doesn't ship an xorg.conf and I don't know whether I should create one for the 1.3 version of xrandr or not. Please help me solve this problem.
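    A minimal sketch of one thing to try, under the assumption that the framebuffer being too small for both screens side by side (1400 + 1920 = 3320 pixels wide) is what's failing; the Virtual line is only needed if the --fb approach doesn't work.

        # Set the framebuffer size explicitly together with the layout
        xrandr --fb 3320x1080 \
               --output LVDS  --mode 1400x1050 --pos 0x0 \
               --output VGA-0 --mode 1920x1080 --pos 1400x0

        # If that fails, a minimal /etc/X11/xorg.conf with a big virtual screen (RandR 1.2-style):
        #   Section "Screen"
        #       Identifier "Screen0"
        #       SubSection "Display"
        #           Virtual 3320 1080
        #       EndSubSection
        #   EndSection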

    Read the article

  • MongoDB on 128mb 32-bit VPS (plus Tornado and Redis)

    - by apito
I am curious how MongoDB will perform on a limited VPS. Specifically, I'll deploy this configuration on a 32-bit Ubuntu 9.04 server with 128 MB of memory (UPDATE: now I'm considering 360 MB too): nginx and redis; three instances of Tornado apps (one is for the mobile site - a limited app, not my primary audience), with around 8 collections in total, a social webapp for my community; and MongoDB. Everything besides MongoDB seems to have a small footprint; memory-mapping-wise, I don't know how MongoDB will behave. I know it's a bit of a stretch to use this kind of config on a tiny VPS, but that's what I can afford for now. I expect to have.. hmm.. maybe ~50 / 15 rps. I did my homework doing a lot of frontend optimizations, and YSlow gives grade A 91 (ruleset V2) :-) Anyone willing to share experiences? E.g. how big the data set was when Mongo hit the ceiling, performance when Mongo does a lot of disk IO, etc. Thanks. UPDATE: this is my pet project. I'll get back to you when I next have spare time to run httperf in a VirtualBox VM with the exact spec. Suggestions on how to do the stress testing are welcome; I'm new to this kind of stuff.
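    Not experience with that exact setup, but a minimal sketch of starting mongod with a reduced on-disk footprint on an MMAPv1-era build (the 1.x/2.x series that would match Ubuntu 9.04); the dbpath is an assumption. Keep in mind that 32-bit MongoDB builds are limited to roughly 2 GB of data per server, which may bite before RAM does.

        # Smaller initial data files, no preallocation (old MMAPv1-era flags)
        mongod --dbpath /var/lib/mongodb --smallfiles --noprealloc --port 27017

        # Watch resident memory, page faults and lock % while load-testing with httperf
        mongostat 5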

    Read the article

  • copy large LVM volume(14TB) from one server to another

    - by bruce
Recently I have had to copy a very large LVM volume (14 TB) from server A to server B. Below are the filesystems of server A and server B.

    server A:

        [root@AVDVD-Filer ~]# df -h
        Filesystem                          Size  Used Avail Use% Mounted on
        /dev/mapper/vg_avdvdfiler-lv_root    16T   14T  1.5T  91% /
        tmpfs                               3.0G     0  3.0G   0% /dev/shm
        /dev/cciss/c0d0p1                   194M   23M  162M  13% /boot
        /dev/mapper/vg_avdvdfiler-test      2.3T  201M  2.1T   1% /test
        /dev/sr0                            3.3G  3.3G     0 100% /mnt

    server B:

        [root@localhost ~]# df -h
        Filesystem                          Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup-LogVol00        20G  2.5G   16G  14% /
        tmpfs                               3.0G     0  3.0G   0% /dev/shm
        /dev/cciss/c0d0p1                   194M   23M  162M  13% /boot
        /dev/mapper/VolGroup00-LogVol00      16T  133M   15T   1% /xiangao/lv1
        /dev/mapper/VolGroup00-LogVol01     4.7T  190M  4.5T   1% /xiangao/lv2

    I want to copy the LVM volume /dev/mapper/vg_avdvdfiler-lv_root on server A to the LVM volume /dev/mapper/VolGroup00-LogVol00 on server B. Server A and server B are in the same IP segment. The LVM volume on server A holds files averaging about 500 MB each (avi, wmv, mp4, etc.). I tried mounting /dev/mapper/vg_avdvdfiler-lv_root from server A on server B via NFS and then copying with cp, but it clearly failed. Because the LVM volume is so big, I don't have a good idea of how to do this, and I'm hoping for a good solution here. I'm Chinese and my English is poor - sorry, and thanks, everyone!
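    A minimal sketch of one common approach, assuming SSH access between the two servers: rsync directly from A to B, which shows progress and can be interrupted and resumed. The source directory is a placeholder for wherever the media files live on server A's volume; the destination is the /xiangao/lv1 mount point from the df output above, and serverB stands for B's IP address.

        # On server A: push everything to server B over SSH; run inside screen/tmux
        # so a dropped session doesn't kill a multi-day copy.
        rsync -avP /some/source/dir/ root@serverB:/xiangao/lv1/

        # Re-running the same command after an interruption only transfers what is missing.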

    Read the article

  • i7 Windows 7 laptop or Macbook Pro 15" (For making 3D model and Animation)

    - by sppdhs
Hi everyone, I'm thinking of buying a new laptop, as my current one will be 5 years old next year and it currently gives me a blue screen every time I run heavy software for making 3D models and animation. The question is: should I buy a MacBook Pro or a Windows 7 laptop? I tried to research this; some say that Windows would be better, especially because you can buy a very high-spec laptop for a cheaper price, while others say the MacBook Pro is the better choice, as it can run Windows 7 via Boot Camp at 100% performance in every application even though it's a Mac. Is this true? Which one is actually better? By the way, the software I usually use is 3ds Max, Maya, and ZBrush. Note as well that I have never used a Mac before. I've checked around some stores and, budget-wise, I'll need around AU$3000+ to buy a MacBook Pro and AU$2000+ to buy a Windows laptop - quite a big difference in price range. Thanks in advance for your help.

    Read the article

  • organizing my music and my itunes

    - by Cawas
What can we do to organize our music? I've got over 20k items in my iTunes library, at least 5k with ratings and play counts, and apparently just 12k music files, and I can't understand how this question has not been properly answered yet. Maybe there is no answer. I have too many duplicates, broken links, bad music, corrupted files... in short, a big mess with no tags! There's probably no single piece of software capable of organizing everything, though I'd love one. Hopefully some time in the near future we will all be able to just sync the cloud of our automagically selected music to a newly created offline copy. But meanwhile... Please do consider that I've at least given a shot (even if not a full test drive) to every single answer linked here already, plus a few more. I'm fine with using other software (Mac too, please) to organize, but I'd need it to sync (retrieve and put back) at least iTunes ratings, because of my iPhone and smart playlists. I'm not looking for an iTunes replacement. I'm hoping to hear what you hardcore music organizers out there are using as your own solutions! :) I myself am using way too many tools, getting way too little done, and end up going song by song.

    Read the article

  • URL autocomplete no longer working in Chrome

    - by Yuji Tomita
The browser URL autocomplete started behaving differently yesterday. I used to access my top URLs by typing the first one or two letters of a URL and then pressing Enter. Now I have to visually fish for the right one and press the down arrow to select the URL. Big difference. Does anybody know if I can get the old functionality back somehow? Have I messed up a setting? Example of how my browser used to work: Gmail.com: CMD+L, type G, Enter. Stackoverflow.com: CMD+L, type S, Enter. Normally, the browser bar would already be highlighted with gmail.com after typing the first g. It would narrow the matches depending on what characters were typed next, or simply go there if I pressed Enter. UPDATE: I just realized my history tab looks suspicious - no entries. But clearly Chrome is pulling some data from my history, as I get very personalized recommendations when typing in a letter. UPDATE: Fixed! I saved my bookmarks, removed my ~/Library/Application\ Support/Google/Default directory (careful, it looks like absolutely everything is stored here), restarted Chrome, and within one visit to Gmail.com my autocomplete was filling in my URLs like before. Beautiful.
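    If anyone else tries the same fix, a minimal sketch that backs the profile directory up instead of deleting it outright (macOS path as given in the question; quit Chrome first). Moving the directory aside is reversible, whereas rm -rf is not.

        # Quit Chrome first, then move the profile aside instead of deleting it
        mv ~/Library/Application\ Support/Google/Default ~/Desktop/chrome-Default-backup

        # Relaunch Chrome; it recreates a fresh Default profile.
        # If something important is missing, quit Chrome and move the backup back:
        #   mv ~/Desktop/chrome-Default-backup ~/Library/Application\ Support/Google/Default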

    Read the article

  • What is the "in-the-wire" size of a ethernet frame? 1518 or 1542?

    - by chrisapotek
According to the table here, the MTU = 1500 bytes and the payload part is 1500 - 42 bytes, or 1458 bytes (<- this is actually wrong!). Now on top of that you have to add the IPv4 and UDP headers, which are 28 bytes (20 IP + 8 UDP). That leaves my maximum possible application message at 1430 bytes! But looking for this number on the Internet I see 1472 instead. Am I doing this calculation wrong? All I want to find out is the maximum application message I can send over the wire without the risk of fragmentation. It is definitely not 1500, because that includes the frame headers. Can someone help? The confusion is that the payload can actually be as large as 1500 bytes, and that's the MTU. So now, what is the in-the-wire size for a payload of 1500? From that table it can be as big as 1542 bytes. So the maximum app message I can send is 1472 (1500 - 20 (IP) - 8 (UDP)) for a maximum in-the-wire size of 1542. It amazes me how things can get so complicated when they are actually simple. And I have no clue how someone came up with the number 1518 if the table says 1542.
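    A short worked breakdown of where the different numbers come from, plus a way to verify the 1472 figure empirically; the figures are standard Ethernet/IPv4 values, and 8.8.8.8 is just an example target.

        # Max UDP application payload without fragmentation on a 1500-byte MTU link:
        #   1500 (MTU) - 20 (IPv4 header) - 8 (UDP header) = 1472 bytes
        #
        # On-the-wire sizes for a full 1500-byte IP packet:
        #   1518 = 1500 + 14 (Ethernet header: 6 dst + 6 src + 2 ethertype) + 4 (FCS)
        #   1522 = 1518 + 4 (optional 802.1Q VLAN tag)
        #   1542 = 1522 + 8 (preamble + SFD) + 12 (inter-frame gap)
        #
        # Verify empirically with the Don't Fragment bit set (Linux ping):
        ping -M do -s 1472 8.8.8.8     # should succeed on a clean 1500-MTU path
        ping -M do -s 1473 8.8.8.8     # should fail with "Message too long" / needs fragmentation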

    Read the article

  • NVidia TwinView - slow rendering on dual desktop

    - by lisak
Hey, does anybody have experience with this? I've set it up 4 times on 4 different machines, and there were always problems with slow rendering (for instance, scrolling pages in the browser is not fluent). But there was always something that finally made it work perfectly... I remember that one time this option helped, but not now: Option "RenderAccel" "1". The hardware is an Nvidia GeForce 8400GS or a Zotac GeForce 9500GT, monitors connected via DVI and HDMI connectors, proper nvidia driver installed.

        Section "ServerLayout"
            Identifier     "X.org Configured"
            Screen      0  "Screen0" 0 0
            InputDevice    "Mouse0" "CorePointer"
            InputDevice    "Keyboard0" "CoreKeyboard"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
            ModulePath  "/usr/lib64/xorg/modules"
            FontPath    "/usr/share/fonts/local"
            FontPath    "/usr/share/fonts/TTF"
            FontPath    "/usr/share/fonts/OTF"
            FontPath    "/usr/share/fonts/Type1"
            FontPath    "/usr/share/fonts/misc"
            FontPath    "/usr/share/fonts/CID"
            FontPath    "/usr/share/fonts/75dpi/:unscaled"
            FontPath    "/usr/share/fonts/100dpi/:unscaled"
            FontPath    "/usr/share/fonts/75dpi"
            FontPath    "/usr/share/fonts/100dpi"
            FontPath    "/usr/share/fonts/cyrillic"
        EndSection

        Section "Module"
            Load  "dri2"
            Load  "glx"
            Load  "extmod"
            Load  "record"
            Load  "dbe"
        EndSection

        Section "InputDevice"
            Identifier  "Keyboard0"
            Driver      "kbd"
        EndSection

        Section "InputDevice"
            Identifier  "Mouse0"
            Driver      "mouse"
            Option      "Protocol" "auto"
            Option      "Device" "/dev/input/mice"
            Option      "ZAxisMapping" "4 5 6 7"
        EndSection

        Section "Monitor"
            Identifier   "Monitor0"
            VendorName   "Unknown"
            ModelName    "Acer AL1715"
            HorizSync    30.0 - 83.0
            VertRefresh  50.0 - 75.0
        EndSection

        Section "Device"
            Identifier  "Nvidia"
            Driver      "nvidia"
            VendorName  "NVIDIA Corporation"
            BoardName   "MSI big bang-fuzion"
        EndSection

        Section "Device"
            Identifier  "Device0"
            Driver      "nvidia"
            VendorName  "NVIDIA Corporation"
            BoardName   "GeForce 8400 GS"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth   24
            Option         "RenderAccel" "1"
            Option         "AllowGLXWithComposite" "1"
            Option         "TwinView" "1"
            Option         "TwinViewXineramaInfoOrder" "DFP-1"
            Option         "metamodes" "CRT: 1280x1024 +1920+0, DFP: 1920x1080 +0+0"
            SubSection "Display"
                Depth   24
            EndSubSection
        EndSection
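    A hedged suggestion rather than a known fix: on GeForce 8/9-era cards the commonly cited workaround for sluggish 2D scrolling was tweaking the driver's pixmap placement and glyph cache at runtime with nvidia-settings. Whether these attributes are still honoured by the driver version in use is an assumption worth verifying.

        # Query the current value first, then try the commonly suggested settings
        nvidia-settings -q InitialPixmapPlacement
        nvidia-settings -a InitialPixmapPlacement=2 -a GlyphCache=1   # add to autostart if it helps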

    Read the article

  • How to minimize the risk of employees spreading critical information?

    - by Industrial
Hi everyone, what is common sense when it comes to minimising the risk of employees spreading critical information to rival companies? As of today, it's clear that not even the US government and military can be sure that their data stays safely within their doors. Thereby I understand that my question should probably instead be written as "What is common sense to make it harder for employees to spread business-critical information?" If anyone really wants to spread information, they will find a way; that's the way life works and always has. If we make the scenario a bit more realistic by narrowing our workforce, assuming we only have regular John Does on board and not Linux-loving sysadmins, what would be good precautions to at least make it harder for employees to send business-critical information to the competition? As far as I can tell, there are a few obvious solutions that clearly have both pros and cons: block services such as Dropbox and similar, preventing anyone from sending gigabytes of data over the wire; ensure that only files below a set size can be sent as email (?); set up VLANs between departments to make it harder for kleptomaniacs and curious people to snoop around; plug all removable media units - CD/DVD, floppy drives and USB; make sure that no changes to hardware configuration can be made (?); monitor network traffic for non-linear events (how?). What is realistic to do in the real world? How do big companies handle this? Sure, we can take the former employee to court and sue, but by then the damage has already been caused... Thanks a lot

    Read the article

  • How to connect AD Explorer from Sysinternals to Global Catalog

    - by Oliver
I'm using the Sysinternals AD Explorer quite frequently to search and inspect an Active Directory without any big problems. But now I'd like to connect not just to a single AD server; instead, I'd like to inspect the global catalog. If in the AD Explorer connect dialog I enter only the DNS name of the machine serving the global catalog (e.g. dns.to.domain.controller), I only get the specific domain it is responsible for, not the whole forest (that's normal behaviour and expected by me). If I add the default port number for the global catalog (3268), in the form dns.to.domain.controller:3268, AD Explorer simply crashes without any further message. The global catalog itself works as expected under the given name and port number, because our Apache server uses exactly this address and port number to authenticate some users. So, any hints or tips on how to access the global catalog from AD Explorer? Or are there any other nice tools like AD Explorer that don't have problems accessing the global catalog?
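    Not an AD Explorer fix, but a quick way to confirm from the command line that the global catalog answers on 3268 and returns forest-wide results; a minimal sketch using OpenLDAP's ldapsearch, where the bind account and the search filter are placeholders.

        # Simple bind against the global catalog port; empty base + subtree scope searches the forest
        ldapsearch -x -H ldap://dns.to.domain.controller:3268 \
            -D "someuser@example.com" -W \
            -b "" -s sub "(sAMAccountName=jdoe)" dn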

    Read the article

  • How do I lower the hardware volume? (volume too high)

    - by Zom-B
    I have a 4yo Dell laptop with Windows XP Pro (modern ones unfortunately don't have a physical volume knob), and lately I'm using my Apple earphones, because they have much better low frequency response than my $10 earphones. They also have the side effect of being much louder. To give an indication of my agony, for most tasks (movie, music, games) I have my main volume at 3 ticks: drag to 0 with the mouse and press the up key 3 times (the handle does not even raise 1 pixel) and my wave volume at 50%. I notice that when I do this, I have a lot of digital noise, because I'm using just a tiny fraction of the 16-bit space. If I drag the Wave slider down until I barely hear the audio, it becomes really distorted and noisy, indicating that this is digital volume (in the DirectSound driver or something) and not hardware volume. I experimented in Audition. When I make a tone of 1000Hz at -50db, (all windows volumes at max) the volume is just below my pain threshold. When I zoom in to see how high the sample values reach, I see that just 8 of the 16 bits are used (about -100 ~ 100). When I generate such tone at -80db (minimum I can specify) then I can still clearly hear the tone, although really noisy. When I zoom in, I see that just 3 out of 16 bits are used. I created a squarewave tone that is just 1 bit high, and I can still hear it! For most uses, this is not a big problem (audiophiles will disagree!), as I just have more noise than usual (about the same as old 8 bit hardware), but I'm also in the process of programming a hearing test program, in which case this problem is a death blow as the test subjects will even hear tones at the bottom of the theoretical range (lowering the windows volume is futile, see above) (I cannot update drivers, as Dell has discontinued XP support for my model)

    Read the article

  • Inbox not updating in Exchange 2010, all users affected

    - by TuxMeister
I'm battling this darn issue this morning. We have the following setup: a big Hyper-V machine hosting the servers as VMs; a VM for the CAS: WEB.XXX.local; a VM for the Mailbox role: EXC.XXX.local. The servers are running Server 2008 R2 with Exchange 2010 SP1, and the clients are all running Windows 7 Pro x64 with Outlook 2010 x64. The problem we're having is that nobody is able to see any emails received today (16th of October), but everyone is able to send externally. When I reply to an email received externally, I don't get an NDR, yet the user cannot see my reply. This is what I have found and tried thus far: if we create a subfolder in Outlook 2010 and move any email from the inbox into that folder, the change is immediately reflected in OWA; we've been sending test emails to other users internally and to external addresses, and the Sent Items folder contains all those tests, synced properly to OWA as well; I have tried creating a new profile - new emails are still missing; tried disabling Cached Mode - still no luck; also disabled "Download shared folders" - still no luck; tried setting up a brand new Exchange mailbox and configured it on a VM that never had Outlook on it - still the same issue; tried restarting the Exchange services on both the CAS and Mailbox servers - no luck; tried rebooting both the CAS and Mailbox servers - still no luck. I performed a mailbox discovery search on my admin account, and emails from today are being found in the discovery results, so the mail is there, it's just not reaching the user inboxes. Any idea what this hellish thing can be? I've done everything I can think of and also everything I could find out there. Let me know if you need any more details, and thanks for reading this!

    Read the article

  • How do you backup your own files? [on hold]

    - by Antonis Christofides
I'm a system administrator and I use rsnapshot to back up some servers and duplicity for others. Both work fine, each with advantages and disadvantages. Despite that, I am at a loss as to how to back up my own private files. I'd use duplicity to automatically back up my files to a remote server, but the problem is that once in a while I must do a full backup. My emails and important files are 9 GB, and I expect this to increase. Uploading over ADSL at 1 Mbit/s would take 20 hours - too much. rsnapshot doesn't require periodic full backups (only the first time), but it must run on the remote server and have a means to connect to my computer; if the server is compromised (or simply if the NSA decides to use it), my own machine is also compromised. Not good. The only solution I've come up with is to use encfs, use unison to synchronize the files to a remote server, and use duplicity or rsnapshot on the remote server to back up those files. In that case, the question is whether I can sync the files on many computers: is it possible for encfs to be used with the same key on many computers? I also think that if I append one character to an unencrypted file, its encrypted encfs counterpart might change a lot, so incrementals with duplicity would be less efficient - but that's not a big deal. Maybe also, when I need to restore a file, finding the correct file to restore could be a pain because of filename encryption. I wonder whether there is any other possibility that I've overlooked. Maybe I'm asking too much for my personal use and I should settle for an external disk?
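    A minimal sketch of the encfs + unison idea, under the assumption that the encrypted directory (including its .encfs6.xml config, which holds the encrypted volume key) is what gets synced; copying that directory to a second machine and mounting it with the same passphrase is how the "same key on many computers" part normally works. Host and path names are placeholders.

        # One-time setup: create the encrypted store and a cleartext mount point
        encfs ~/.vault-crypt ~/vault          # answer the prompts; standard mode is fine

        # Work with files under ~/vault; only ~/.vault-crypt (ciphertext + .encfs6.xml) leaves the machine
        unison ~/.vault-crypt ssh://backuphost//srv/backups/vault-crypt

        # On a second computer: sync ~/.vault-crypt down first, then mount with the same passphrase
        encfs ~/.vault-crypt ~/vault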

    Read the article

  • Unable to resize ec2 ebs root volume

    - by nathanjosiah
I have followed many of the tutorials, which pretty much all say the same thing, basically: stop the instance; detach the volume; create a snapshot of the volume; create a bigger volume from the snapshot; attach the new volume to the instance; start the instance back up; run resize2fs /dev/xxx. However, that last step is where the problems start. Running resize2fs always tells me the filesystem is already xxxxx blocks big and does nothing, even with -f passed. So I continue with the tutorials, which all basically say the same thing, namely: delete all partitions; recreate them as they were, except with the bigger sizes; reboot the instance and run resize2fs. (I have tried these steps both from the live instance and by attaching the volume to another instance and running the commands there.) The main problem is that the instance won't start back up again, and the system error log provided in the AWS console doesn't show any errors (it does, however, stop at the GRUB bootloader, which to me indicates that it doesn't like the partitions; yes, the boot flag was toggled on the partition, with no effect). The other thing that happens, regardless of what changes I make to the partitions, is that the instance the volume is attached to reports that the partition has an invalid magic number and the superblock is corrupt. However, if I make no changes and reattach the volume, the instance runs without a problem. Can anybody shed some light on what I could be doing wrong? Edit: On my new 20 GB volume with the 6 GB image, df -h says:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/xvde1      5.8G  877M  4.7G  16% /
        tmpfs           836M     0  836M   0% /dev/shm

    And fdisk -l /dev/xvde says:

        Disk /dev/xvde: 21.5 GB, 21474836480 bytes
        255 heads, 63 sectors/track, 2610 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x7d833f39

           Device Boot      Start         End      Blocks   Id  System
        /dev/xvde1               1         766     6144000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/xvde2             766         784      146432   82  Linux swap / Solaris
        Partition 2 does not end on cylinder boundary.

    Also, sudo resize2fs /dev/xvde1 says:

        resize2fs 1.41.12 (17-May-2010)
        The filesystem is already 1536000 blocks long.  Nothing to do!
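    A minimal sketch of the usual fix, under the assumption that the real problem is that partition 1 still ends at its old boundary, so the filesystem has no room to grow into the new 20 GB: extend the partition in place with growpart (from the cloud-utils / cloud-guest-utils package), which keeps the start sector, then resize the filesystem. Device names follow the fdisk output above; the swap partition xvde2 is left alone.

        # From the live instance (or with the volume attached to a helper instance)
        sudo apt-get install -y cloud-guest-utils   # package is cloud-utils on older releases

        sudo growpart /dev/xvde 1      # grow partition 1 into the free space, same start sector
        sudo resize2fs /dev/xvde1      # now resize2fs actually has something to do

        df -h /                        # confirm the new size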

    Read the article

  • How to secure an Internet-facing Elastic Search implementation in a shared hosting environment?

    - by casperOne
    (Originally asked on StackOverflow, and recommended that I move it here) I've been going over the documentation for Elastic Search and I'm a big fan and I'd like to use it to handle the search for my ASP.NET MVC app. That introduces a few interesting twists, however. If the ASP.NET MVC application was on a dedicated machine, it would be simple to spool up an instance of Elastic Search and use the TCP Transport to connect locally. However, I'm not on a dedicated machine for the ASP.NET MVC application, nor does it look like I'll move to one anytime soon. That leaves hosting Elastic Search on another machine (in the *NIX world) and I would probably go with shared hosting there. One of the biggest things lacking from Elastic Search, however, is the fact that it doesn't support HTTPS and basic authentication out of the box. If it did, then this question wouldn't exist; I'd simply host it somewhere and make sure to have an incredibly secure password and HTTPS enabled (possibly with a self-signed certificate). But that's not the case. That given, what is a good way to expose Elastic Search over the Internet in a secure way? Note, I'm looking for something that hopefully, will not require writing code to provide shims for the methods that I want (in other words, writing forwarders).
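    One common pattern, sketched below under the assumption that you can run nginx (or any reverse proxy the host provides) in front of Elastic Search on the *NIX box: bind Elastic Search to localhost only, then terminate HTTPS and HTTP basic auth in nginx and forward to it, so no code or shims are needed. Host names, paths and the certificate are placeholders.

        # Create a basic-auth user file (htpasswd ships with apache2-utils)
        sudo htpasswd -c /etc/nginx/es.htpasswd searchuser

        # Minimal nginx vhost: TLS + basic auth in front of a localhost-only Elastic Search
        sudo tee /etc/nginx/sites-available/es-proxy >/dev/null <<'EOF'
        server {
            listen 443 ssl;
            server_name search.example.com;                    # placeholder
            ssl_certificate     /etc/ssl/certs/search.crt;     # placeholder (self-signed is fine)
            ssl_certificate_key /etc/ssl/private/search.key;

            auth_basic           "Elastic Search";
            auth_basic_user_file /etc/nginx/es.htpasswd;

            location / {
                proxy_pass http://127.0.0.1:9200;              # ES bound to localhost only
            }
        }
        EOF
        sudo ln -s /etc/nginx/sites-available/es-proxy /etc/nginx/sites-enabled/ && sudo nginx -s reload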

    Read the article
