Search Results

Search found 12611 results on 505 pages for 'matlab figure'.

Page 356/505

  • PCI Tv Tuner not receiving signal

    - by C-dizzle
    I inherited an Angel II PCI Dual TV Tuner card, popped it into my computer's PCI slot and plugged in my coax cable, but I'm having trouble getting any kind of signal to it. The computer is running Windows 7 Ultimate 32-bit. I assume the card works, because the computer recognizes it and installs the drivers. I'm trying to use Windows Media Center to watch/record TV. Here are a few things I have tried to get it working:
    - Uninstalled/reinstalled the newest drivers
    - Hooked up an antenna to the coax input instead of my cable
    - Went directly from the wall outlet into the card with the coax cable, instead of using the cable splitter
    - Tried a different output on the splitter in case the out port was bad
    I haven't tried a different coax cable yet; it should be fine since it's pretty new. Since this is my first time setting up a TV tuner card, is there anything specific I need to do with it? Is there any configuration that needs to be done on it? Do I need a digital receiver? I was getting pretty frustrated with it, so I wanted to turn here to the experts; I'm sure someone can help me figure it out.

    Read the article

  • Dead Linux server - need help and options

    - by Choi S.
    All, I have a Dell PE 1950 with 2 SATA drives in a software RAID 1. The OS is CentOS 5.5 (2.6.18.x). Starting this afternoon we received hardware errors (something on the bus is bad, E171F) and the machine became unresponsive. We hard booted and it came back up for about 5 hours, but then it happened again. I'm trying to figure out our options. Unfortunately we do not have similar hardware, but I have a small desktop that I can use. I was contemplating putting one of the drives into the desktop and then starting it up. My goal was to then P2V it using VMware Converter, but apparently the free v5.x doesn't support hot cloning/converting of a RAID volume; only the Enterprise 4.x version of Converter does. My questions are:
    1. Is putting a single drive out of a RAID 1 pair into another piece of hardware safe? Based on my research and understanding it appears to be, but I would like confirmation.
    2. Is there any workaround for VMware Converter not supporting RAID volumes during a hot clone/convert session?
    3. Are there other options I'm overlooking?
    Thanks in advance for reading and responding. --Choi S.
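
    For question 1, a single member of a Linux software RAID 1 can usually be brought up as a degraded array in another box. A minimal sketch, assuming the transplanted disk shows up as /dev/sdb in the desktop and the array uses ordinary md metadata; the device and md names are assumptions, not taken from the post:
        # Scan the disk for md superblocks and confirm which array it belonged to
        mdadm --examine /dev/sdb1
        # Assemble the array in degraded mode from the single member
        # (older mdadm releases may not accept --readonly with --assemble; drop it
        #  and rely on the read-only mount below if so)
        mdadm --assemble --run --readonly /dev/md0 /dev/sdb1
        # Mount the filesystem read-only for inspection or an offline copy
        mkdir -p /mnt/rescue
        mount -o ro /dev/md0 /mnt/rescue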

    Read the article

  • Mac OS X will only upload zero-byte files through FTP

    - by tabacitu
    I'm using Mac OS X Lion and I've been having this problem with FTP (any FTP client, mind you - I tried Transmit, FileZilla, Cyberduck and the Terminal, all with the same result). I can browse files in my FTP client, but when I upload files, the client hangs for a few seconds, then thinks it uploaded the files successfully, but it only creates a new file with one blank line in it. Sometimes it manages to upload 4-5 lines. It then returns:
        226 - Error during read from data connection
        226 Transfer aborted
    But 2xx is a success message. It is not a server issue, since any Windows machine will upload just fine using the same network. Can anybody figure out what the problem is? It renders my Mac useless for web development. The problem persists with SFTP and FTP with SSL/TLS.
    Later edit: Solved! OK, turns out the problem goes away when I take out my router and connect directly through PPPoE. So the problem is with the router, I thought. But no, the problem is with the Mac that connects through a router that connects through PPPoE and tries to upload using FTP. Pretty specific, I know. The problem is with the MTU (maximum transmission unit). Apparently Mac OS X breaks the file into chunks that are too large for the router to send, because the router's MTU was set lower than Mac OS X's. My router's was 1492, which is OK, but my Mac's MTU was 1500, which is unacceptable. I don't even understand why it works directly with PPPoE. Anyway, if you encounter the same problem, this is how you diagnose and fix it. In Terminal, run:
        ifconfig | grep mtu
    to see what the MTU is for en0 (or en1; mine was en0). If it's 1500, run:
        sudo ifconfig en0 mtu 1300
    This should solve it. If so, it may only last until the next restart. You can also change the MTU in System Preferences > Network > Ethernet > Advanced > Hardware.
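
    A way to confirm the mismatch empirically before changing anything, if you hit something similar: probe the path with don't-fragment pings and find the largest payload that gets through. A rough sketch; the target addresses are placeholders, and 28 bytes of IP/ICMP header ride on top of the payload size:
        # 1472 + 28 = 1500 bytes: should pass on a plain Ethernet path
        sudo ping -D -s 1472 -c 3 192.168.1.1
        # 1464 + 28 = 1492 bytes: the largest frame a PPPoE link should pass
        sudo ping -D -s 1464 -c 3 8.8.8.8
        # If the 1472-byte probe fails beyond the router but the 1464-byte probe works,
        # the PPPoE MTU of 1492 is the limit and lowering the Mac's MTU makes sense.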

    Read the article

  • Recovering ZFS pool with errors on import.

    - by Sqeaky
    I have a machine that had some trouble with some bad RAM. After I diagnosed it and removed the offending stick of RAM, The ZFS pool in the machine was trying to access drives by using incorrect device names. I simply exported the pool and re-imported it to correct this. However I am now getting this error. The pool Storage no longer automatically mounts sqeaky@sqeaky-media-server:/$ sudo zpool status no pools available A regular import says its corrupt sqeaky@sqeaky-media-server:/$ sudo zpool import pool: Storage id: 13247750448079582452 state: UNAVAIL status: The pool is formatted using an older on-disk version. action: The pool cannot be imported due to damaged devices or data. config: Storage UNAVAIL insufficient replicas raidz1 UNAVAIL corrupted data 805066522130738790 ONLINE sdd3 ONLINE sda3 ONLINE sdc ONLINE A specific import says the vdev configuration is invalid sqeaky@sqeaky-media-server:/$ sudo zpool import Storage cannot import 'Storage': invalid vdev configuration I should have 4 devices in my ZFS pool: /dev/sda3 /dev/sdd3 /dev/sdc /dev/sdb I have no clue what 805066522130738790 is but I plan on investigating further. I am also trying to figure out how to use zdb to get more information about what the pool thinks is going on. For reference This was setup this way, because at the time this machine/pool was setup it needed certain Linux features and booting from ZFS wasn't yet supported in Linux. The partitions sda1 and sdd1 are in a raid 1 for the operating system and sdd2 and sda2 are in a raid1 for the swap. Any clue on how to recover this ZFS pool?
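
    Since the post mentions wanting to use zdb, a minimal sketch of pulling the on-disk labels for the exported pool - the device paths are the ones named in the post, and the exact flags may differ between the zfs-fuse / early ZFS-on-Linux releases of that era:
        # Dump the vdev labels from each member; the GUIDs recorded here should show
        # which physical device the bare number 805066522130738790 used to map to
        sudo zdb -l /dev/sda3
        sudo zdb -l /dev/sdd3
        sudo zdb -l /dev/sdc
        sudo zdb -l /dev/sdb
        # Read the cached configuration of the exported pool without importing it
        sudo zdb -e -C Storage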

    Read the article

  • Sharing music on NAS with Zune and iPod?

    - by osij2is
    After being a long-time iPod owner, I'm switching to the new Zune with its subscription model. I haven't bought a Zune yet, but I'm planning on doing so within the next month or so. I have approximately 40GB worth of music and my girlfriend has her iPod music library at around 30GB. I've been trying to figure out how to migrate all our music off of our laptops/desktops and centralize everything on my NAS. Sharing iPod music isn't too bad: sharing from one machine to all is fairly easy within the iTunes player, and as far as storing all the music on a NAS goes, iPods aren't too bad either, and I imagine other systems aren't difficult. But I'm really new to the Zune and I'm beginning to run into some issues. My questions are:
    1. Is it possible to store all music from our iPods and Zune subscriptions and share music between the iPod/Zune within the same file share on my NAS? I'm sure it's possible to store music on a share, but I'm not sure how iTunes and the Zune software differ.
    2. Is there third-party software, maybe something like DoubleTwist, that can sync from the NAS to multiple desktops/laptops? I've never used DoubleTwist, but it's something I found that looks close to what I need.
    I've never quite done this myself, so I'm trying to find a solution that can: a) store music on a network share; b) sync between different devices (Zune/iPod) seamlessly.

    Read the article

  • tftpd starts randomly

    - by Mutant
    A few days ago my Little Snitch filter started popping up alerts for tftpd. I'd never seen this before, so I immediately started freaking out, thinking my Mac had been compromised. I can't find anything unusual on the system. The process usually dies before I can trace it (Little Snitch never allowed the connection, it just left the popup up). I finally caught it once and found this:
        [10:32]: sudo lsof -nlP | fgrep tftp
        Password:
        tftpd  1924 18446744  cwd   DIR  1,3       1326         2 /
        tftpd  1924 18446744  txt   REG  1,3      29856 163979456 /usr/libexec/tftpd
        tftpd  1924 18446744  txt   REG  1,3     600576 163686622 /usr/lib/dyld
        tftpd  1924 18446744  txt   REG  1,3  303300608 189014898 /private/var/db/dyld/dyld_shared_cache_x86_64
        tftpd  1924 18446744    0u  IPv4 0x34a76100fcbb06e3  0t0  UDP *:55818
        tftpd  1924 18446744    2u  IPv4 0x34a76100f1113c53  0t0  UDP *:69
        [10:32]: ps ax | fgrep 1924
        1924   ??  S      0:00.00 /usr/libexec/tftpd -i /private/tftpboot
        1949 s000  S+     0:00.00 fgrep 1924
    For the life of me I can't figure out what is starting this. Nothing in cron, LaunchDaemons, etc. Google searches haven't yielded much either. The connecting IP is different each time. So my question is: has anyone seen anything like this before?
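
    On OS X, /usr/libexec/tftpd is normally started on demand by launchd rather than by cron, so that is the first place worth checking. A hedged sketch; the plist path is the stock location on OS X of that era, and if the job was loaded from somewhere else the grep should still surface it:
        # Is a tftp job registered with launchd at all?
        sudo launchctl list | grep -i tftp
        # The stock definition lives here; the "Disabled" key should normally be true
        cat /System/Library/LaunchDaemons/tftp.plist
        # If it has been enabled, unload it and keep it disabled across reboots
        sudo launchctl unload -w /System/Library/LaunchDaemons/tftp.plist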

    Read the article

  • Servers/Websites Keep Going Down

    - by Tyler Johnson
    Okay, I'm a noobie. I know how to build and compose a website, but I have no idea what I'm doing when it comes to servers and server commands, etc. I've recently had a problem with all of the sites on our server going down at once, after which I have to go in and reboot the server for them to come up again. At first this was annoying, but now it is becoming agonizing, as it now takes 3-4 reboots for the websites to come back up. I contacted support for my hosting, but they are not being very helpful. They just keep telling me what the issue might be and basically telling me that I'm going to have to look into it and figure it out, which really isn't possible since I know nothing. Anyway, here are the things they said were possible reasons:
    - I have "strange logs" in my Apache webserver log: error: sh: fetch: command not found
    - My php.ini memory limit is 256M, which is very high. It should be 32M or 64M.
    - The server is reaching MaxClients, meaning we have more than 150 visitors at a time. (They supposedly "fixed" this, but the sites/server are still going down.)
    - I have some WordPress sites with plugins getting errors like:
        PHP Warning: pack(): Type H: illegal hex digit G in...
        PHP Fatal error: Cannot use object of type stdClass as array in...
        PHP Fatal error: Maximum execution time of 30 seconds exceeded in...
        PHP Fatal error: Call to undefined function file_exists() in...
        PHP Parse error: syntax error, unexpected '<'
    I know that's a lot, but I really am at wits' end and have no idea what to do now. If anyone could maybe give me some advice or point me in the right direction I would greatly appreciate it! Thanks! Oh, and here are the specs for my server: RAM: 2048MB, CPU shares: 40, primary disk: 50GB, data transfer: 75GB, port speed: 5Mbps, type: Linux.
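
    Not from the post, but a hedged starting point for narrowing down whether Apache is hitting MaxClients or the box is simply running out of memory when the sites stop responding; the log paths assume a typical Linux layout and vary by distribution, and the process name may be apache2 instead of httpd:
        # Has Apache logged that it ran out of worker slots?
        grep -i "MaxClients" /var/log/httpd/error_log /var/log/apache2/error.log 2>/dev/null | tail
        # How many Apache processes are running, and how much memory do they use in total?
        ps aux | grep -c "[h]ttpd"
        ps aux | grep "[h]ttpd" | awk '{sum += $6} END {printf "total RSS: %.0f MB\n", sum/1024}'
        # Is the 2GB of RAM exhausted / is the box swapping?
        free -m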

    Read the article

  • finding files that match a precise size: a multiple of 4096 bytes

    - by doub1ejack
    I have several Drupal sites running on my local machine with WAMP installed (Apache 2.2.17, PHP 5.3.4 and MySQL 5.1.53). Whenever I try to visit the administrative page, the PHP process seems to die. From apache_error.log:
        [Fri Nov 09 10:43:26 2012] [notice] Parent: child process exited with status 255 -- Restarting.
        [Fri Nov 09 10:43:26 2012] [notice] Apache/2.2.17 (Win32) PHP/5.3.4 configured -- resuming normal operations
        [Fri Nov 09 10:43:26 2012] [notice] Server built: Oct 24 2010 13:33:15
        [Fri Nov 09 10:43:26 2012] [notice] Parent: Created child process 9924
        [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Child process is running
        [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Acquired the start mutex.
        [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Starting 64 worker threads.
        [Fri Nov 09 10:43:26 2012] [notice] Child 9924: Starting thread to listen on port 80.
    Some research has led me to a PHP bug report on the '4096 byte bug'. I would like to see if I have any files whose size is a multiple of 4096 bytes, but I don't know how to do that. I have Git Bash installed and can use most of the typical Linux tools through it (find, grep, etc.), but I'm not familiar enough with Linux to figure it out on my own. Little help?
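
    A sketch of one way to do this from Git Bash, assuming its GNU find and awk are available (they usually ship with it); run it from the Drupal docroot:
        # Print "size path" for every file, then keep non-empty files whose size is an exact multiple of 4096
        find . -type f -printf '%s %p\n' | awk '$1 > 0 && $1 % 4096 == 0 {print}'
        # If this copy of find lacks -printf, a slower fallback using wc:
        find . -type f | while read -r f; do
            s=$(wc -c < "$f")
            [ "$s" -gt 0 ] && [ $((s % 4096)) -eq 0 ] && echo "$s $f"
        done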

    Read the article

  • Possible disk IO issue

    - by Tim Meers
    I've been trying to figure out what my IOPS really are on my DB server array and see if it's just too much. The array is four 72.6GB 15k RPM drives in RAID 5. To calculate back-end IOPS for RAID 5 the following formula is used: (reads + (4 * writes)) / number of disks = IOPS per disk. The formula is from MSDN. I also want to calculate the average queue length, but I'm not sure where they get the formula from; I think it reads on that page as avg queue length / number of disks = actual queue. To populate the formula I used perfmon to gather the needed information. I came up with this, under normal production load: (873.982 + (4 * 28.999)) / 4 = 247.495. Also the disk queue length of 14.454 / 4 = 3.614. So, to the question: am I wrong in thinking this array has very high disk IO?
    Edit: I got the chance to review it again this morning under normal/high load, this time with even bigger numbers, and IOPS in excess of 600 for about 5 minutes before it died down again. But I also took a look at Avg. Disk sec/Transfer, % Disk Time and % Idle Time. These numbers were taken when the reads/writes per sec were only 332.997/17.999 respectively:
        % Disk Time:             219.436
        % Idle Time:               0.300
        Avg. Disk Queue Length:    2.194
        Avg. Disk sec/Transfer:    0.006
        Pages/sec:              2927.802
        % Processor Time:         21.877
    Edit (again): Looks like I have that issue solved. Thanks for the help. Also, for a pretty slick parser I found this: http://pal.codeplex.com/ It works pretty well for breaking down the data into something usable.
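
    A small sketch that redoes the arithmetic from the post, in case anyone wants to plug in their own perfmon counters; the per-spindle rating in the comment is a common rule of thumb for 15k drives, not a number from the post:
        reads=873.982; writes=28.999; disks=4
        # Back-end load per spindle for RAID 5: each logical write costs roughly 4 disk I/Os
        awk -v r="$reads" -v w="$writes" -v n="$disks" \
            'BEGIN { printf "per-disk IOPS: %.3f\n", (r + 4*w) / n }'
        # A 15k RPM drive is commonly rated around 175-210 random IOPS,
        # so ~247 per spindle is already past what these disks comfortably sustain.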

    Read the article

  • Windows 7 SSH file server

    - by Siriss
    Hello all. I have looked at the other posts but have not quite found an answer; I have a question about Windows file sharing over SSH. I have copssh installed and it is working for Remote Desktop connections. I have port 22 forwarded on my router, etc. I connect from a Mac or PuTTY with this address:
        ssh -l copsshusername 3391:localhost:3389 [external ip]
    That works fine. I would like to configure Windows 7 to give the SSH account that I use to log in access to certain shared folders. I have documents and videos and things that I would like to be able to download externally. I have done this before on Linux, and a long time ago on XP, but I cannot figure out what I am missing on Windows 7. There is a designated SSH user that copssh uses to run the service and that I use to log in as. I have googled and googled and have not found a solution that does everything I need, which is why I am turning here for ideas. I hope I am explaining this correctly. Thank you very much for your help!
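
    One low-friction route, if it fits: the same copssh service that accepts the tunnel login usually also answers SFTP, so folders can be pulled without touching Windows file sharing at all. A rough sketch from the Mac side; the account name and paths are placeholders, and whether /cygdrive paths are reachable depends on how copssh was set up:
        # Interactive transfer over the existing SSH service
        sftp copsshusername@your.external.ip
        sftp> get -r /cygdrive/c/Users/Public/Videos
        # Or a one-shot copy with scp
        scp -r copsshusername@your.external.ip:/cygdrive/c/Users/Public/Documents ~/Downloads/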

    Read the article

  • New PC: win7 install issues with 2x3TB in RAID1

    - by goober
    Background / components: Not sure what I've done wrong. I built one PC before successfully in a similar way, but this one seems to be struggling. I have the following components:
    - ASUS P8Z68-V/Pro Gen 3 (updated to latest firmware)
    - 16GB (2x8GB) RAM
    - Corsair HX850 power supply
    - 2x3TB drives on the Intel Z68 controller
    - 1x128GB SSD on the Marvell controller
    - Sapphire 7950
    The problem: I set up my 3TB disks in RAID 1.
    - The controller appears to recognize them fine during boot as one 2.7TB RAID 1 volume.
    - Windows setup sees two disks, both 746GB, but will only let me install to one, and the install appears to work fine.
    - After the installer reboots, I receive a "Windows failed to start" error referencing code 0xc000000e and \Windows\system32\winload.exe.
    - Every time I do an install, a new additional "win7" entry is added to the boot menu; all of them lead to this error.
    What I've tried:
    - Updated the BIOS to the latest firmware
    - Attempted to repair the install
    - Tried clearing / removing RAID / re-creating the RAID on the drives
    - Tried formatting the drives during install
    - Attempted to clear the menu of entries (can't figure out how to do that)
    No matter how many times I destroy the RAID array, format the disks, etc., the boot entries keep piling up. Any idea where I'm going wrong?
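
    The 746GB figure is roughly what is left of a 3TB disk above the 2TB MBR addressing limit, so one hedged thing to check is whether setup is initializing the array as MBR instead of GPT (booting from a GPT disk also requires starting the installer in UEFI mode). A sketch of wiping the array and forcing GPT from the setup command prompt (Shift+F10); the disk number 0 is an assumption, and clean erases everything on that disk:
        diskpart
        list disk
        select disk 0
        clean
        convert gpt
        exit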

    Read the article

  • Which program is locking all my executable files?

    - by Tom Wijsman
    When updating any software product, as well as when manually trying to replace .exe files, it says that access to the file is denied, and in fact the System process is holding a handle to the file when I check with Process Explorer. "This must be a driver or something that is malfunctioning" was my first thought, but now I wonder how I figure out which driver / program is doing this and why. Unlocker doesn't seem to be working for me, unless someone can tell me how to use it properly other than making it appear as a magical wand in the notification area. This is what Unlocker puts in my event log:
        The description for Event ID 1060 from source Application Popup cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event:
        \??\C:\Program Files (x86)\Unlocker\UnlockerDriver5.sys
        the message resource is present but the message is not found in the string/message table
    Upon searching event 1060 I get: "<file name> has been blocked from loading due to incompatibility with this system." Perhaps it is because I have 64-bit?
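
    When the handle owner shows up as the System process, the usual suspect is a kernel-mode file-system filter driver (antivirus, backup or an unlocker-style tool) rather than an ordinary program. A hedged sketch using Sysinternals Handle plus the built-in filter manager list, run from an elevated prompt; the file path is a placeholder:
        :: Which process or kernel component holds a handle to the blocked file?
        handle.exe -a "C:\Path\To\Blocked.exe"
        :: List the minifilter drivers attached to the file system; a third-party
        :: filter near the top of this list is a likely culprit
        fltmc filters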

    Read the article

  • What hardware would I need (approx) to run ESXi server?

    - by mr.b
    Hi, I am considering purchasing off-the-shelf commodity hardware in order to build a server that will host virtual machines using ESXi. The intended purpose for this server is NOT mission-critical tasks. It will have to run perhaps 20-50 Windows XP/Vista/7 virtual machines (in total, but closer to the 20 figure). Each guest would have 1-2GB of RAM, and probably two to three times more disk space than the guest OS needs with a clean install and all updates applied (that would be around 6-8GB for XP, and I believe closer to 10-15GB for Win7). Those guests will act as a test ground for a new product that is network management software, so the guests will idle most of the time once initially loaded, but if I give them some task to complete, they should be able to perform reasonably well. Now, from what I have learned, CPU is usually not much of an issue (6 cores would do it), and memory should not be lacking, but it doesn't have to be the sum of all guests because of overcommitment. That leads me to IO, which, as it seems, is the bottleneck. Since I have very little experience with ESXi (and ESX, too), I'd like to ask:
    1. How much memory could I save by overcommitment, and how does it affect performance?
    2. Is a 6-core CPU enough to run the system described above?
    3. Would it be possible to run the entire server off two (or even one) SSD drives hosting the system virtual disks, with a few additional HDDs (2-3) in RAID 0 used as secondary storage?
    4. I read somewhere that ESXi allows something like a "master image", essentially a virtual machine that is "deployed" many times, so that disk space can be saved by storing only the differences for each specific guest instead of copying around whole virtual disks. Is this true, and how can it help me?
    5. Are there any other things I need to take into consideration when building this off-the-shelf solution?
    I should probably mention here that I'm fully aware of issues like SPOF regarding the power supply, RAID 0, etc., but since it's only a testing ground and not a production system, that's not so important for me. Thanks, B.
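
    Not part of the post, but a back-of-the-envelope sketch using only the numbers given there (20 guests, up to 2GB RAM each, roughly 15GB of disk each); the 25% saving from page sharing and ballooning is an assumed figure for mostly-idle identical guests, not a measured one:
        guests=20; ram_gb=2; disk_gb=15
        echo "RAM if fully reserved:     $(( guests * ram_gb )) GB"
        echo "RAM with ~25% overcommit:  $(( guests * ram_gb * 75 / 100 )) GB"
        echo "Disk with full clones:     $(( guests * disk_gb )) GB"
        # With linked clones off one master image, most of that disk collapses
        # to a single base VMDK plus per-guest deltas.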

    Read the article

  • 100% uptime for a web application

    - by Chris Lively
    We received an interesting "requirement" from a client today. They want 100% uptime with off-site failover on a web application. From our web application's viewpoint, this isn't an issue. It was designed to be able to scale out across multiple database servers, etc. However, from a networking standpoint I just can't seem to figure out how to make it work. In a nutshell, the application will live on servers within the client's network. It is accessed by both internal and external people. They want us to maintain an off-site copy of the system that, in the event of a serious failure at their premises, would immediately pick up and take over. Now we know there is absolutely no way to resolve it for internal people (carrier pigeon?), but they want the external users to not even notice. Quite frankly, I haven't the foggiest idea of how this might be possible. It seems that if they lose Internet connectivity then we would have to do a DNS change to forward traffic to the external machines, which, of course, takes time. Ideas?
    UPDATE: I had a discussion with the client today and they clarified the issue. They stuck by the 100% number, saying the application should stay active even in the event of a flood. However, that requirement only kicks in if we host it for them. They said they would handle the uptime requirement if the application lives entirely on their servers. You can guess my response.
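
    On the "DNS change takes time" point, the delay is mostly the record's TTL rather than the change itself, so one partial mitigation (well short of real 100%, and the zone fragment below is purely illustrative with documentation addresses) is to publish the application's hostname with a very short TTL and have a health-check script rewrite it on failure:
        ; example.com zone fragment - 60-second TTL so a failover repoints clients quickly
        app.example.com.    60    IN    A    203.0.113.10   ; client's on-premises address
        ; on failover, the monitoring script republishes the record as:
        ; app.example.com.  60    IN    A    198.51.100.20  ; off-site copy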

    Read the article

  • Can't remote into Virtual PC

    - by Spamela
    I used to be able to remote into my Virtual PCs. It had been working for at least a year; yesterday it just stopped working and I cannot figure it out. Things I have triple-checked:
    1. My Virtual PCs have "Allow Remote Access" checked.
    2. My Virtual PCs have an account in the Administrators group that is password protected.
    3. My host's entry in the registry for the Terminal Services port is still the default of 3389.
    So here is the strange thing: I can't even remote into the Virtual PC from its host, much less from another PC. From the host, I can ping the Virtual PC and get a response, but when trying to remote into it from the host I get the following error:
        Remote Desktop can't connect to the remote computer for one of these reasons:
        1) Remote access to the server is not enabled
        2) The remote computer is turned off
        3) The remote computer is not available on the network
    My host is running Windows 7. The Virtual PCs are running XP. Thank you for looking at this!
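
    A hedged way to tell whether the XP guests are even listening for RDP any more, run inside one of the guests; all three commands are standard on XP, and the service or firewall state are the usual things that change silently after an update:
        :: Is anything listening on the RDP port inside the guest?
        netstat -an | find ":3389"
        :: Is the Terminal Services service actually running?
        sc query TermService
        :: Is the XP firewall quietly blocking it?
        netsh firewall show state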

    Read the article

  • What's the difference between pulling from a branch into master and pushing that branch onto master?

    - by Justin808
    In TortoiseGit, on the repository, I right-click and select Sync. At the top of the dialog there are options for Local Branch and Remote Branch.
    1. If the local branch is named DeveloperA and the remote branch is master and I do a push, what happens?
    2. If the local branch is master and the remote branch is DeveloperA and I pull, what happens?
    3. If I am on the master branch, right-click, select Merge and change the From to be my DeveloperA branch, what happens?
    4. If I try to push from master to the remote master and the remote has been updated, git stops and tells me to pull. It seems that if I push from DeveloperA to master it doesn't stop, it just clobbers; is that correct?
    We're having an issue using git where the remote master branch gets clobbered at times and we are trying to figure out why. For example, there is a developer working on his DeveloperA branch. He'll pull from master to get any updates, then push to master to push out his changes. But there are times when the push lists more files in the outgoing commit list than he has edited. The odd thing is he can't revert those files, as git says they are up to date and have not been modified. Yet when he pushes, git pushes the files out. The problem is that if there are changes between his pull and push, those changes get clobbered.
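
    For comparison, this is roughly what the equivalent command-line flow looks like; git only overwrites a remote branch on a forced or mis-specified push, so a sketch of the safe sequence, using the branch names from the post:
        # Update the local picture of the remote, then merge master into the work branch
        git checkout DeveloperA
        git fetch origin
        git merge origin/master          # resolve any conflicts here, in the branch
        # Publish: push the local branch to its own remote branch, not straight onto master
        git push origin DeveloperA:DeveloperA
        # Landing on master is then an explicit, fast-forward-checked step
        git checkout master
        git pull origin master
        git merge DeveloperA
        git push origin master           # rejected as non-fast-forward if someone else pushed first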

    Read the article

  • Apache2.2 not responding or logging anything on Win 7

    - by Adam
    I'm having some trouble with Apache 2.2 on Windows 7. For over a year it's been running with no problem, but all of a sudden requests have just stopped responding. They don't time out as such; the browser just keeps on waiting forever. Nothing is recorded in the error log (set to debug level), the access log, or Windows' Event Log. The problem showed up when I added a new VHost and restarted; however, a syntax check has shown there's no problem with the config (from the little I changed), and the service does actually start error-free. I've also disabled VHosts and tried with just localhost. I've tried to telnet to the web server, and it connects, but nothing happens: the prompt just goes blank, I can't type anything, and I effectively become stuck. I've ensured there's a rule within Windows Firewall for Apache, and I've even disabled the firewall entirely just to check it wasn't the cause. Still the same. If I stop Apache, however, the request fails immediately. I've uninstalled and reinstalled Apache, in the hope it might magically fix something using the default config, but still no joy. I've tried using a different port, but nothing changed. Does anybody have any suggestions to fix this? Or to perhaps figure out whether it's Apache itself not responding or something sitting between the two that's holding things up? I'm not too savvy on debugging Windows issues like this, and I've been searching for hours but not found anything of use to me. Cheers, Adam
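
    One Windows-specific thing worth trying, offered as a guess rather than a known fix for this exact case: Apache 2.2 on Windows sometimes hangs exactly like this (connection accepted, nothing logged, nothing served) because of the AcceptEx/winsock accept path, and the stock workaround is to disable it in httpd.conf and restart:
        # httpd.conf - fall back to the plain accept() code path on Windows
        # (2.2.15 and later; on older 2.2 builds the equivalent is Win32DisableAcceptEx)
        AcceptFilter http none
        AcceptFilter https none
        # Two more directives often suggested for the same symptom:
        EnableSendfile Off
        EnableMMAP Off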

    Read the article

  • Can ZFS ACL's be used over NFSv3 on host without /etc/group?

    - by Sandra
    Question at the bottom.
    Background: My server setup is shown below. I have an LDAP host with a group called group1 that contains user1 and user2. The NAS is FreeBSD 8.3 with ZFS, with one zpool and a volume. serv1 gets /etc/passwd and /etc/group from the LDAP host. serv2 gets /etc/passwd from the LDAP host, but its /etc/group is local and read-only, so it does not know anything about which groups the LDAP server has. Both servers connect to the NAS with NFSv3.
    What I would like to achieve: I would like to be able to create/modify groups in LDAP to allow/deny users read/write access to NFSv3-shared directories on the NAS. Example: group1 should have read/write to /zfs/vol1/project1 and nothing more.
    Question: The problem is that serv2 doesn't have an LDAP-controlled /etc/group file. So the only way I can think of to solve this is to use ZFS permissions with inheritance, but I can't figure out how, or what permissions I should set. Does someone know if this can be solved at all, and if so, any suggestions?
                  +----------------------+
                  | LDAP                 |
                  | group1: user1, user2 |
                  +----------------------+
                     |ldap     |ldap     |ldap
                     |         v         |
                     |    +-----------+  |
                     |    | NAS       |  |
                     |    | /zfs/vol1 |  |
                     |    +-----------+  |
                     |      ^nfs3  ^nfs3 |
                     v      |      |     v
        +-----------------------+   +----------------------------+
        | serv1                 |   | serv2                      |
        | /etc/passwd from LDAP |   | /etc/passwd from LDAP      |
        | /etc/group from LDAP  |   | /etc/group local/read only |
        +-----------------------+   +----------------------------+
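
    Not an answer from the post, but a hedged sketch of the NAS-side piece, since with NFSv3/AUTH_SYS the group check ultimately happens on the FreeBSD server against the groups it knows about. The compact permission-set syntax below is FreeBSD's NFSv4-ACL setfacl; whether it gives the behaviour wanted here still depends on the NAS itself resolving group1 (via LDAP or a local entry), and the dataset name is a guess based on the /zfs/vol1 mountpoint:
        # On the FreeBSD NAS: let group1 read/write/traverse project1, inheriting to new files
        setfacl -m g:group1:modify_set:fd:allow /zfs/vol1/project1
        # Keep ZFS from stripping the inherited ACEs on create/chmod
        zfs set aclinherit=passthrough tank/vol1
        zfs set aclmode=passthrough tank/vol1
        # Verify
        getfacl /zfs/vol1/project1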

    Read the article

  • Can't do anything with ext. HDD, ".Trashes" is probably the boogeyman. (On a Mac!)

    - by Sander Schaeffer
    I bought an external hard drive today, a Samsung, but I'm not able to do anything with it. A few notes on that:
    - I can't put anything on the hard drive. It keeps hanging on 'preparing to copy files'.
    - I can delete anything on the hard drive, including system files, except the folder ".Trashes". It gives 'Unexpected error: -50'.
    - I've tried to empty my own trash can; no change.
    - I've set the file permissions on .Trashes to read/write for everyone; that doesn't change a thing.
    - I tried to format the whole drive with Disk Utility, but it quits at the start because the drive cannot be deactivated.
    - I've tried a few Terminal commands:
        sudo rm -rf /Volumes/Untitled\ 1/.Trashes   -> Directory not empty
        rm -rf /Volumes/Untitled\ 1/.Trashes        -> no permissions
      Also:
        cd /Volumes
        ls -al
        cd name_of_partition
        ls -al
        rm -rf .Trashes
      Again: permission error.
    - I can't change the drive permissions via Disk Utility either, via the 'repair disk permissions' button, because it is 'blank'.
    I really can't figure out how to delete .Trashes, format the drive, or get the damn thing working. Any suggestions?
    P.S. If this is the wrong Stack Exchange site, please redirect me!
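
    A hedged sequence for forcing the erase from Terminal when Disk Utility refuses because the volume "cannot be deactivated"; the identifier disk2 below is a placeholder, so check diskutil list first, and the erase wipes everything on the external drive:
        # Find the external drive's identifier (e.g. /dev/disk2)
        diskutil list
        # Force everything on it to unmount, even if something is holding the volume
        sudo diskutil unmountDisk force /dev/disk2
        # Re-partition and erase the whole disk as Mac OS Extended (Journaled)
        sudo diskutil eraseDisk JHFS+ Samsung /dev/disk2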

    Read the article

  • Find which files an apache process is writing to?

    - by Haluk
    We have an Apache process which becomes IO-bound from time to time. Using atop, we can see it is a write operation. Using lsof -p <PID> we can see a list of files open by the httpd process. First we thought the log files must be the problem, so we turned them off just to test; however, the write operations still continue. We will continue testing a few other things. For instance, we use PHP session variables a lot; maybe the PHP session files are getting all the writing. But is there a way to quickly identify the files which get written to by the httpd process? That way we can focus our efforts on those files.
    UPDATE: We used the strace command as suggested. Here are two lines from the output:
        write(23, "\27\0\0\0\3SET CHARACTER SET utf8", 27) = 27
        write(23, "\17\0\0\0\3SET NAMES utf8", 19) = 19
    We do not have a mysql process on this server. So is strace also showing what is being written to an ethernet port?
    UPDATE 2: During high IO load, the process which consumes most of the write resources gives the following output for strace -e trace=write -p <PID>:
        --- SIGCHLD (Child exited) @ 0 (0) ---
        write(9, "!", 1) = 1
        write(19, "OPTIONS * HTTP/1.0\r\nUser-Agent: Apache (internal dummy connection)\r\n\r\n", 70) = 70
    However, I cannot figure out where these are being written to.
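
    strace prints the file descriptor number in each write() call, and those numbers can be mapped back to actual files or sockets while the process is still running. A small sketch, with 1234 standing in for the real PID and 19/23 standing in for the descriptors seen above:
        # What is behind descriptor 23? A symlink to a file path, or "socket:[inode]"
        ls -l /proc/1234/fd/23
        # lsof resolves sockets further, e.g. to a TCP connection to a remote MySQL host
        lsof -p 1234 -a -d 19,23
        # To watch only writes and see the fd-to-path mapping live:
        strace -e trace=write -p 1234 -y        # -y needs a reasonably recent strace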

    Read the article

  • Advice on resizing 1280*720 for web audiences.

    - by jamiethompson90
    Forgive my spelling, I'm posting this from my mobile. I've recently decided to record videos to help teach a visual language. My camera likes to boast that it can record in 1280; it's a cheap camera, about £75, so the quality isn't amazing, but it's okay. Anyway, it has some other settings for lower resolutions, but I figure I might as well record in a larger size in case the need arises for a bigger source file in the future. I've been looking at JW Player to play the converted files (MP4 to FLV, I think). What do you think a good size would be to convert to? I want it to look nice and clear, remembering it is a visual language, so lip patterns, facial expressions, body movement, fingers etc. are all important. Sound is not that important, but I would like the option to toggle captions. Thanks for any help, any advice appreciated, first time I have done a video project! P.S. If anyone's interested, it's BSL.
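
    If ffmpeg is available, a hedged starting point for a web-friendly downscale that keeps the 16:9 shape of the 1280x720 source; the 854x480 target and quality settings are suggestions to experiment with, not values from the post:
        # Scale 1280x720 down to 854x480, H.264 video + AAC audio in an MP4 container
        # (older ffmpeg builds may need -strict experimental for the built-in AAC encoder)
        ffmpeg -i source_720p.mp4 -vf scale=854:480 \
               -c:v libx264 -crf 23 -preset medium \
               -c:a aac -b:a 96k output_480p.mp4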

    Read the article

  • umask seems to vary by user

    - by paullb
    I've got a development Ubuntu system on which I have several users: myself (with full sudo) and about 5 other users. (I've set up the system so everything in this respect is still at its default setting.) I'm trying to set the system up so that multiple people can collaborate in a single directory by using grouping, and I want the default permissions to be 664. However, when some users edit files the permissions come out as 644. After a lot of investigating, most users have a umask (checked at the prompt) of 0002, and when they create files they are 664 (as expected), but there are 2 users (myself and one other) who have a 0022 umask (so the files that come out are 644 and nobody else can write to them). I've looked everywhere but can't figure out why a couple of users wind up with a different umask (e.g. there is nothing in .bash_profile or anything like that). Any ideas for the source of the discrepancy?
    /etc/bashrc:
        if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
            umask 002
        else
            umask 022
        fi
    /etc/profile:
        if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
            umask 002
        else
            umask 022
        fi
    EDIT: My (bad) ~/.bashrc:
        # .bashrc
        # Source global definitions
        if [ -f /etc/bashrc ]; then
            . /etc/bashrc
        fi
        # User specific aliases and functions
        export LANG=en_US.utf8
    Other user's (good) .bashrc:
        # .bashrc
        # Source global definitions
        if [ -f /etc/bashrc ]; then
            . /etc/bashrc
        fi
        # User specific aliases and functions
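
    Given the conditional quoted above, the umask can only come out as 022 for a user whose UID is 199 or below or whose primary group name differs from their username, so a quick sketch to see which branch each account actually falls into (run as each user):
        # The three values the /etc/profile test actually looks at
        echo "uid=$UID  user=$(id -un)  primary group=$(id -gn)"
        # The branch taken: 002 only when UID > 199 AND group name matches user name
        if [ $UID -gt 199 ] && [ "$(id -gn)" = "$(id -un)" ]; then
            echo "would get umask 002"
        else
            echo "would get umask 022"
        fi
        umask    # what the shell actually ended up with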

    Read the article

  • Windows 7 Explorer: how to show total size of all files in current folder?

    - by matt wilkie
    In Windows XP Explorer one can turn on the status bar, which shows, among other things, the total size of all the files in the current folder, or the cumulative size of the selected files. How do I get the same at-a-glance information in Windows 7? Selecting files doesn't count, as it stops after 15 files, and it's rare that I'm concerned about total size with that few files (it's pretty easy to estimate in my head). Thanks.
    UPDATE: Information derived from the context menu (select, right-click, Properties) isn't "at a glance", and not as smooth as selecting files and clicking the details link at the bottom in any case. Thank you for fleshing out more of the available routes, though. Yes, Q19232 is similar to this one, though it is not a duplicate: that question is about easy free-space-on-disk stats, and this one is about easy used-space-by-contents-of-this-folder stats. The answer for both is the same though. You can't! Hopefully someone will figure out how to get this lost feature back with a shell extension or something.

    Read the article

  • why would resetting the Netgear N300 router fix my Win 7 laptop's slow wifi?

    - by rjnagle
    In the past day the wifi download speeds of my Windows 7 HP 64-bit laptop have slowed considerably. I am trying to troubleshoot the problem and figure out whether it's hardware related (i.e., is the Intel(R) Centrino(R) Wireless-N 2230 the problem?) or router related. I have a Netgear N300 router connected to my modem. I'm using Speedtest to measure my speed. First, during the problem state, my iPad can download and upload at normal speeds; it's only my Windows 7 laptop that is having problems. Because the iPad downloads at normal speeds, that would tell me the problem is specific to the laptop (either hardware or software). But when I restarted my Netgear router, the laptop's wifi problems disappeared. That just doesn't make sense: if we know that one device can connect properly to the router, why would the laptop have problems? What are some possible reasons why this might happen? Also, during the problem state, I noticed that on my laptop upload speeds were faster than download speeds. Anybody have a guess about what might cause upload speeds on one device to be faster than its download speeds? Is there any action I could take (or option to enable) so this problem won't occur again? (I initially thought my problem might be software or memory related, Norton AV or browser plugins, but even after I disabled everything and made sure the memory footprint was minimal, the slowdown was still occurring, and it solved itself altogether when the router was reset.)

    Read the article

  • MySQL server high traffic makes websites really slow or unable to load

    - by Holapress
    Lately we have been having a lot of problems with our MySQL server, from websites being really slow to being unable to load at all. The server is a dedicated server that only runs our MySQL database. I have been running some tests using a profiler (JetProfiler) and a load-testing tool (loadUI). If I use loadUI to open 50 simultaneous connections to one of our websites that runs a recently added big query, the website already becomes unable to load. One of the things that worries me is that JetProfiler always shows a Threads_connected value of 1.00, and it seems that when it hits around 2.00 I'm unable to connect. The three big peaks are from my tests with loadUI: the first was 15 simultaneous connections, which still let me load the website, just really slowly; the second was 40 simultaneous connections, which already made it impossible to load; and the third was 100 connections, which also didn't load. Another thing that worries me is that JetProfiler says all the queries being used are full table scans; could this maybe be the problem? The website I use as a test runs 3 queries: one for a menu that outputs around 1000 rows, one for the ads that has around 560 rows, and a big one to get posts that has around 7000 rows (see screenshot below). I have also monitored the CPU of the server and there seems to be no problem there; even when I open a lot of connections with loadUI the CPU stays low. I can't seem to figure out what the main cause is of the websites being unable to load when there is a high amount of traffic. If anyone has other suggestions for testing, or an idea of something that might cause the problem, please let me know.
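
    A hedged set of checks for the two symptoms described (connections apparently capping out very low, and every query doing a full table scan); the table and column names in the EXPLAIN and ALTER lines are placeholders for the real menu/ads/posts queries:
        # How many connections does the server actually allow, and how many are in use under load?
        mysql -e "SHOW VARIABLES LIKE 'max_connections'; SHOW GLOBAL STATUS LIKE 'Threads_connected'; SHOW FULL PROCESSLIST;"
        # Confirm the full table scans and see whether an index would be used
        mysql -e "EXPLAIN SELECT * FROM posts WHERE category_id = 5 ORDER BY created_at DESC LIMIT 20;" your_database
        # If EXPLAIN shows type=ALL on a 7000-row table hit by every page view,
        # adding an index on the filtered/sorted columns is the usual first fix:
        mysql -e "ALTER TABLE posts ADD INDEX idx_category_created (category_id, created_at);" your_database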

    Read the article
