Search Results

Search found 20163 results on 807 pages for 'struct size'.


  • VirtualBox - not filling entire screen

    - by jdavis
    I am new to VirtualBox and am trying to set up an instance of 64-bit Windows 7. I have the virtual machine running with Windows 7 installed, but it only fills a small portion of my screen. Even when I go full screen, the window stays the same size and the rest of the screen is filled with gray space. I have installed the VirtualBox Guest Additions, which let me go from a resolution of 800x600 to 1024x768, but this still isn't satisfactory, as my laptop display is 1600x900. Any help on this would be most appreciated. Thanks.
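
    If the Guest Additions are installed but the guest still won't offer 1600x900, one hedged workaround (the VM name "Windows7" is a placeholder) is to register a custom video mode from the host:

        # List your VMs to find the real name, then register a 1600x900 mode:
        VBoxManage list vms
        VBoxManage setextradata "Windows7" CustomVideoMode1 1600x900x32
        # Or hint the resolution to a running guest directly:
        VBoxManage controlvm "Windows7" setvideomodehint 1600 900 32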

    Read the article

  • Recover files after unsuccessful partitioning

    - by arsan
    I wanted to install another Linux on my computer, so I tried to resize one of my NTFS partitions with Norton Partition Magic. It didn't complete successfully: it showed some errors and said that the partition was not resized and was still the same size as before. But after I rebooted my computer, I couldn't open that partition anymore, and I am also not able to mount it from Linux. So this is my question: I had very important data on that partition - can I recover it? I guess nothing is actually deleted; something is just messed up so the partition is unusable. Can I get it back? Please reply if there is any possible way of doing this, thank you.
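
    A hedged recovery sketch, assuming the damaged partition is /dev/sda2 (adjust to your layout), and avoiding any further writes to the disk until the data is safe:

        # Image the partition first if you have the space, and work on the copy:
        dd if=/dev/sda2 of=/mnt/backup/sda2.img bs=1M conv=noerror,sync
        # TestDisk can find and repair a mangled partition table / NTFS boot sector:
        testdisk /dev/sda
        # ntfsfix (from ntfs-3g) clears minor inconsistencies so the volume mounts again:
        ntfsfix /dev/sda2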

    Read the article

  • Photoshop: How to save a selection to PNG

    - by Aniti
    I have a largish PSD file with a couple of hundred layers, and I would like to extract selected areas from it into PNG files. An area can consist of a couple of layers. Being new to Photoshop, I have been using the following workaround: duplicate the needed layers into a new scratch PSD file of the same size, Trim to transparency, Save As PNG, undo the Trim, hide the layers, rinse and repeat. I suppose I could do it without the scratch file and just crop the selection, Save As PNG and undo, but there must be a nicer method. What other ways are there to export a selected area to PNG? EDIT: This is on Windows XP running Photoshop CS3 Extended.

    Read the article

  • Recurring Apache 2.0.52 error on CentOS 4 - 'could not create `rewrite_log_lock`'

    - by warren
    I have been seeing a recurring issue on my web server:

        [Sun May 16 03:10:19 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
        [Sun May 16 04:10:05 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
        [Sun May 16 05:10:04 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
        [Sun May 16 05:17:13 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed

    So far, the only fix I have found when this happens is to reboot the server, which is non-ideal. Restarting httpd does not clear the error. df indicates I have 20+ GB free, and top and free both report 800+ MB free (1.2 GB counting buffers/cache):

        > df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/simfs             40G   18G   23G  44% /

        > free
                     total       used       free     shared    buffers     cached
        Mem:       1474560     300832    1173728          0          0          0
        -/+ buffers/cache:     300832    1173728

    Any ideas on why this recurs, and how to prevent or fix it?
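
    For what it's worth, "(28)No space left on device" from mod_rewrite usually means the kernel has run out of SysV semaphores, not disk space; a hedged sketch for checking and clearing leaked semaphores (the "apache" owner is an assumption; it may be "httpd" or "nobody" on your box):

        # List semaphore arrays; a full table causes exactly this error:
        ipcs -s
        # Remove semaphores leaked by the web server user without rebooting:
        ipcs -s | awk '/apache/ {print $2}' | xargs -r -n1 ipcrm -s
        # Inspecting (and raising) the kernel semaphore limits can prevent a recurrence:
        sysctl kernel.sem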

    Read the article

  • SAMBA and Linux ACLs -- "Permission denied" on write to share but file written nevertheless

    - by MCH
    I set up a writable share directory "/home/net/share" with an ACL like this:

        sudo mkdir -p "/home/net/share"
        sudo setfacl -m "u:localuser:rwx,u:remoteuser:rwx,g:users:rwx" "/home/net/share"

    My /etc/samba/smb.conf looks like this:

        [global]
        workgroup = w
        server string = server
        security = user
        load printers = no
        log file = /var/log/samba/%m.log
        max log size = 50
        dns proxy = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes
        encrypt passwords = true
        invalid users = nobody root
        follow symlinks = yes
        wide links = yes

        [share]
        comment = Writable by localuser and remoteuser
        path = /home/net/share
        valid users = remoteuser
        read only = no
        public = no
        printable = no

    Locally, localuser and remoteuser both have user accounts and smbpasswd entries, and both can read, create and delete files in /home/net/share. But when I log on from a different machine (like this: sudo mount -t cifs //server/share mountpoint/ -o username=remoteuser), I get "Permission denied" when trying to create directories and files - yet oddly, the files (not the directories!) do get created despite these messages! How can I get this working?
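
    One hedged avenue (an assumption, not a confirmed fix): Samba applies its own masks on top of the POSIX ACL, and a cifs mount maps everything to a single uid, so it may be worth comparing behavior with explicit masks on the server and an explicit uid mapping on the client:

        # Possible additions to the [share] section (sketch):
        #     create mask = 0664
        #     directory mask = 0775
        #     inherit acls = yes
        # And on the client, mount with an explicit uid/gid for remoteuser:
        sudo mount -t cifs //server/share mountpoint/ -o username=remoteuser,uid=$(id -u),gid=$(id -g)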

    Read the article

  • Resized NTFS partition, now it won't mount.

    - by H4Z3Y
    I had a 1.5 TB drive used as an external for 6 months or so, then I decided to put it in my Linux server for network storage. NTFS was being terribly inefficient, so I wanted to change the filesystem to ext4. I used the ntfsresize command to shrink the partition to 650 GB, which took about 2 hours; then I deleted all of the entries in fstab like a guide told me to and created a new one the size of the NTFS partition, i.e. 650 GB. After I modified fstab, the NTFS partition would no longer mount, and when I plug the drive into Windows it says "This Hard Drive needs to be formatted". Any ideas on how I can recover the data off the drive? I have 600 GB of free space on a different drive, so I just need some way of copying the files off.
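
    A hedged recovery sketch, with the drive assumed to be /dev/sdb and the partition /dev/sdb1: ntfsresize shrinks only the filesystem, not the partition entry, so if (as it sounds) the partition table rather than just fstab was recreated, the two can disagree, which produces exactly the "needs to be formatted" symptom. TestDisk can usually reconcile them:

        # Read-only check of what size the NTFS filesystem itself claims:
        ntfsresize --info /dev/sdb1
        # Search for the real NTFS boundaries and repair the partition table / boot sector:
        testdisk /dev/sdb
        # If it still won't mount, pull the files off read-only:
        mkdir -p /mnt/rescue && ntfs-3g -o ro /dev/sdb1 /mnt/rescue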

    Read the article

  • Is ext4 more expensive than NTFS?

    - by ???
    I have just converted an NTFS partition to ext4, but the total space seems to have shrunk from 421G to 415G. Where did the 6G go? Also, the reserved space has grown to 199M on ext4, much larger than the 78M on NTFS - why? The partition is mainly used for movies and music, so most files are large (around 10M each). I want to use the ext4 filesystem; any suggestions?

        mkfs.ntfs: /dev/sdb4 421G  78M 421G 1% /mnt/mmedia
        mkfs.ext4: /dev/sdb4 415G 199M 393G 1% /mnt/mmedia

    It is also weird that the remaining size on ext4 is 393G - shouldn't it be 415G or 414G? What happened to the missing 22G? Compared to NTFS, ext4 seems to have eaten 28G in total.
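
    A hedged reading of the numbers: ext4 reserves 5% of blocks for root by default, and 5% of 415G is roughly the missing 22G, while filesystem metadata (inode tables, journal) plausibly accounts for the 421G-to-415G drop. For a media-only volume the reservation can be shrunk (assuming the partition is /dev/sdb4, as above):

        # Inspect the reserved block count:
        tune2fs -l /dev/sdb4 | grep -i 'reserved block'
        # Reduce the root reservation from 5% to 1% on a data-only filesystem:
        tune2fs -m 1 /dev/sdb4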

    Read the article

  • How do I recover a RAID 1 volume on Mac OS X (10.7)?

    - by Avry
    I have a Synology NAS that I've set up with RAID 1. The device is set up with two drives, both the same size (i.e. 500 GB each), formatted in ext3, as a RAID 1 volume (i.e. even though the total capacity is 1TB, I effectively only get 500 GB). In the case of a device failure where I can only access one of the drives, how can I recover my data? The solution I'm looking for is something like: 'Put the working drive in an enclosure, and use <some software> to recover your data.'
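
    Since Synology volumes are Linux md RAID on ext3, one hedged route (device names are placeholders) is a Linux machine or VM rather than OS X itself:

        # With the surviving disk attached to a Linux box as /dev/sdb,
        # assemble the degraded mirror and mount it read-only:
        sudo mdadm --assemble --run /dev/md0 /dev/sdb3   # the data partition number is a guess
        sudo mount -o ro /dev/md0 /mnt/recovered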

    Read the article

  • DD-WRT: What firmware and what webserver will fit on my 8MB of flash?

    - by Jeshii
    I'm attempting to make a portable WiFi webserver with PHP support out of an old WRT54GS (v1.0) running DD-WRT. I have 8 MB of flash. I know, it's a tall order. I tried the combination of dd-wrt.v24-13064_VINT_openvpn_jffs_small.bin, Optware, and lighttpd, but didn't have enough space. Now I'm going to try dd-wrt.v24-13064_VINT_mini.bin, but that only saves 300 KB, and I don't think it will make the difference. Are there any other small HTTP servers with PHP support? Heck, I never even got to the point where I could add PHP! Maybe what I'm really looking for is a way to calculate the size and dependencies of Optware packages BEFORE trying to install them. Any ideas?

    Read the article

  • what free app can I use to resize an image to a specific height x width?

    - by kacalapy
    I have an image that is huge, and I want to resize it down to 375 x 210 px to enter a competition. I would like to keep the aspect ratio of the image the same and not crop if I don't have to. I don't think my original image has the same proportions, so I understand I'll need to crop a little to get it in line, but the rest should just be shrunk. That is to say, I don't want to have to crop away a large part of the image to meet the 375 x 210 px requirement.
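
    One free, scriptable option is ImageMagick (a sketch; input.jpg and output.jpg are placeholders):

        # Shrink to fit inside 375x210, preserving aspect ratio (no crop; one side may come up short):
        convert input.jpg -resize 375x210 output.jpg
        # Or fill 375x210 exactly: the ^ flag scales to cover the box, and -extent center-crops only the sliver that overhangs:
        convert input.jpg -resize 375x210^ -gravity center -extent 375x210 output.jpg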

    Read the article

  • Rendering multiple squares fast?

    - by Sam
    So I'm taking my first steps with OpenGL development on Android, and I'm stuck on a serious performance issue. I'm trying to render a whole grid of single-colored squares to the screen, and I'm getting framerates of ~7 FPS. The squares are 9 px in size right now, with a one-pixel border in between, so there are a few thousand of them. I have a class "Square", and the renderer iterates over all Squares every frame and calls the draw() method of each (just the iteration is fast enough; with no OpenGL code the whole thing runs smoothly at 60 FPS). Right now the draw() method looks like this:

        // Prepare the square coordinate data
        GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, vertexStride, vertexBuffer);
        // Set color for drawing the square
        GLES20.glUniform4fv(mColorHandle, 1, color, 0);
        // Draw the square
        GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length, GLES20.GL_UNSIGNED_SHORT, drawListBuffer);

    So it's actually only 3 OpenGL calls. Everything else (loading shaders, filling buffers, getting the appropriate handles, etc.) is done in the constructor, and things like the program and the handles are static attributes. What am I missing here; why is it rendering so slowly? I've also tried loading the buffer data into VBOs, but that was actually slower... maybe I did something wrong there, though. Any help greatly appreciated! :)

    Read the article

  • How do I make an encrypted disk image on Debian?

    - by Blacklight Shining
    I'm basically looking for an equivalent to OS X's encrypted sparsebundles. The solution should have support for file ACLs and should not force me to specify a size in the beginning (the image should only take up as much space as it needs) or require root access to mount and unmount. Ideally, I should be able to set two different passwords (both for the same data), but that's not too important. (I do have root access to the machine and so can install packages and such, but I would rather not have to sudo just to mount an image.)
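
    A hedged sketch with LUKS over a sparse file: the image only consumes blocks as data is written, LUKS allows multiple independent passphrases (the two-password wish), and ext4 inside gives full ACL support. The one requirement it misses is mounting without root, since loop-device mounts normally need it:

        # Create a 20 GB sparse image (near-zero space until used):
        truncate -s 20G ~/secure.img
        # Format as LUKS, then add a second, independent passphrase:
        cryptsetup luksFormat ~/secure.img
        cryptsetup luksAddKey ~/secure.img
        # Open, create a filesystem (first time only), and mount with ACLs enabled:
        sudo cryptsetup open ~/secure.img secureimg
        sudo mkfs.ext4 /dev/mapper/secureimg
        sudo mount -o acl /dev/mapper/secureimg /mnt/secure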

    Read the article

  • using "touch" to create directories?

    - by user66732
    1) in the "A" directory: find . -type f a.txt 2) in the "B" directory: cat a.txt | while read FILENAMES; do touch "$FILENAMES"; done 3) Result: the 2) "creates the files" [i mean only with the same filename, but with 0 Byte size] ok. But if there are subdirs in the "A" directory, then the 2) can't create the files in the subdir, because there are no directories in it. Question: is there a way, that "touch" can create directories?

    Read the article

  • Upload large database SQL file

    - by Devy
    I have a database dump of more than 20 GB on my hard disk. What is the best way to upload it to the server with the least cost possible? I'm on Windows 7, and I have FTP and SSH access to the server. I avoid FTP because my connection cuts off a lot; I can't face re-uploading the whole file after it fails at 99%. I found some tools that split the large .sql file into small .sql files, but they didn't mention how to join those files back into one. Another way would be to archive the big .sql file to .rar with the -v (volume) option, upload the parts through FTP, then unpack them. But unpacking also costs, right? I know it will cost something in any case, but any best practice would be strongly appreciated.
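
    A hedged sketch: compressing first shrinks a text dump dramatically, rsync over SSH keeps partial transfers so an interrupted upload can be completed, and split/cat answers the "how to join the pieces" question (filenames are placeholders):

        # Compress, then upload over SSH; --partial keeps what was sent if the connection drops:
        gzip -c dump.sql > dump.sql.gz
        rsync --partial --progress -e ssh dump.sql.gz user@server:/tmp/
        # Alternative: split into 100 MB pieces, upload them, and reassemble on the server:
        split -b 100M dump.sql.gz dump.part_
        # ...then, on the server:
        cat dump.part_* > dump.sql.gz && gunzip dump.sql.gz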

    Read the article

  • Why are some UDP packets getting blocked?

    - by Tom
    In our organization, we have two test machines running Windows XP. While testing a roll-my-own UDP message server, I found that both machines receive small messages (under 2 KB) just fine. However, when I send large packets to these machines, one receives them fine while the other can't receive them at all. Both machines have SP3 and both have Windows Firewall turned off, but one still isn't working. Can anyone tell me where to look for anything that might be blocking or limiting the packet size on a Windows machine? Thanks.
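
    UDP payloads larger than the path MTU (~1472 bytes on plain Ethernet) get IP-fragmented, and anything dropping fragments (firewall, NIC driver, switch) produces exactly this one-machine-works, one-doesn't symptom; a hedged test from the sender (the address is a placeholder):

        rem Send a 2000-byte payload that must fragment; loss here points at fragment filtering:
        ping -l 2000 192.168.1.20
        rem The same size with "don't fragment" set should report that fragmentation is needed:
        ping -f -l 2000 192.168.1.20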

    Read the article

  • Packaging MATLAB (or, more generally, a large binary, proprietary piece of software)

    - by nfirvine
    I'm trying to package MATLAB for internal distribution, but this could apply to any piece of software with the same architecture. In fact, I'm packaging multiple releases of MATLAB to be installed concurrently. Key things: very large installation size (~4 GB), composed of a core and several plugins (toolboxes). Initially, I created a single "source" package (matlab2011b) that builds several .debs (mainly matlab2011b-core and matlab2011b-toolbox-* for each toolbox). The rules file is just the standard dh $@ catch-all; there is no real build step, only copying of files. I use a number of debian/*.install files to specify which files to copy from an unpacked installation into /usr/lib/. The problem is, every time I build the thing (say, to make a correction to the core package), it re-copies every file listed in the *.install files to e.g. debian/$packagename/usr/ (the build phase) and then has to bundle that into a .deb file. It takes on the order of hours and does a lot of redundant work. So my questions are:

    - Can you make dh_install do a hardlink copy (like cp -l) to save time? (As far as I can tell from the man page, no.)
    - Maybe I should just do this in the Makefile? (That's going to be a big Makefile.)
    - Can you make debuild rebuild only the .debs that need rebuilding, or specify which .debs to rebuild?
    - Is my approach completely stupid? Should I break each of the toolboxes into its own source package too? (I'd have to do some silly templating or something, because there are hundreds of them. :/)

    Read the article

  • Is it dangerous to add/remove a hard drive to a Windows machine which is in standby?

    - by Adal
    Can I add a SATA drive to a Windows 7 machine which is in standby mode? The hardware supports hot-plugging. Could pulling a drive out while in standby corrupt the data on it (unflushed caches, ...)? Does Windows flush before standing by? How about swapping a drive for another drive of a different kind (SSD vs. mechanical) and size, also while in standby? Could the OS, when waking up, believe that the old drive is still there and write to it, thus corrupting it, since the new one has different partitions and data?

    Read the article

  • Another hibernation question

    - by GeekOfTheWeek
    I installed Ubuntu on my Windows 7 Sager laptop using Wubi. Hibernate (i.e. suspend to disk) is not an option from the power icon, only suspend, shutdown, etc. Hibernate is also not an option in my battery/lid-close settings. I understand that hibernation is disabled by default in Ubuntu 12.04. I tried running pm-hibernate, but I get the following message:

        Looking for splash system... none
        s2disk: Snapshotting system

    and then the computer just hangs with a black screen. According to the documentation, if this fails then I can't enable hibernate, but it offers no help in making pm-hibernate succeed. Could swap be my problem? It looks like I have very little swap:

        user@ubuntu:~$ cat /proc/swaps
        Filename                        Type    Size    Used    Priority
        /host/ubuntu/disks/swap.disk    file    262140  0       -1

    The advice on SwapFaq only covers the author's own setup (e.g. I don't have an Ubuntu install disk, since I used Wubi), and it says 'INFO: This will not work for 12.04, resume from hibernate works differently in 12.04.' Any advice? I really need hibernate working to use my laptop as, er, a laptop. Thanks.
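
    One hedged observation: 262140 KB is only about 256 MB of swap, and suspend-to-disk generally needs swap at least the size of RAM, so pm-hibernate may be failing at the snapshot step for lack of space. A sketch for growing the Wubi swap file (the path is from the question; the 4 GB count is an assumption, match or exceed your RAM):

        sudo swapoff /host/ubuntu/disks/swap.disk
        # Recreate the swap file at 4 GB:
        sudo dd if=/dev/zero of=/host/ubuntu/disks/swap.disk bs=1M count=4096
        sudo mkswap /host/ubuntu/disks/swap.disk
        sudo swapon /host/ubuntu/disks/swap.disk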

    Read the article

  • Backup tape compression

    - by pufferfish
    What should I check to confirm that compression is actually happening on our tape backup system? Although the tapes are marked as 200G/520G (native/compressed) capacity, they seem to fill up before the 200G mark (some at less than 100G). I'm using:

    - Sony AIT-4 tape autochanger
    - Sony SDX4-200C (AIT-4) tapes
    - Ubuntu Lucid
    - Bacula

    I've tried checking hardware compression with tapeinfo -f /dev/nst0, which gives:

        Product Type: Tape Drive
        Vendor ID: 'SONY    '
        Product ID: 'SDX-900V        '
        Revision: '0102'
        Attached Changer API: No
        SerialNumber: '0001000036'
        MinBlock: 2
        MaxBlock: 8388608
        SCSI ID: 1
        SCSI LUN: 0
        Ready: yes
        BufferedMode: yes
        Medium Type: Not Loaded
        Density Code: 0x33
        BlockSize: 0
        DataCompEnabled: yes
        DataCompCapable: yes
        DataDeCompEnabled: yes
        CompType: 0x3
        DeCompType: 0x3
        BOP: yes
        Block Position: 0
        Partition 0 Remaining Kbytes: 201778000
        Partition 0 Size in Kbytes: 201779000
        ActivePartition: 0
        EarlyWarningSize: 0
        NumPartitions: 0
        MaxPartitions: 0

    ...so I presume it's on. Note: the Bacula documentation says hardware compression needs to be enabled with system tools such as mt.
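
    The tapeinfo output (DataCompEnabled: yes) already suggests the drive-side compressor is on; a hedged cross-check with the mt tool the Bacula docs mention (mt-st package, drive assumed at /dev/nst0):

        # Explicitly enable drive compression via the st driver:
        mt -f /dev/nst0 compression 1

    Also worth remembering: data that is already compressed or encrypted (media files, gzipped archives) stores at roughly 1:1, which alone can explain tapes filling well before the 2.6:1 marketing figure.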

    Read the article

  • Amazon Careers website - are resumes processed in plain text format only?

    - by sapphiremirage
    The submission site has the following options: "Please upload your resume (Word Document, max size: 512 KB)" OR "Please copy and paste the text version of your file here", with a text box below the latter option. I went ahead and uploaded my shiny LaTeX resume (as a PDF), despite the fact that they seem to want a Word document, and there didn't seem to be any issues. However, when I went back to edit my profile, there was no evidence that my PDF had been uploaded, other than a text version of my resume, awfully formatted and clearly stripped from the PDF, sitting in the text box below "Please copy and paste the text version of your file here". Exasperated, I did a quick and dirty copy of the text from my resume into a Word doc and uploaded that. Same result: no evidence of an uploaded file, just a stripped text version in the text box. What I'm wondering now is: are they only going to look at the text version of my resume? If that's the case, then I'm obviously going to edit it so that it looks halfway decent and doesn't contain such conversion atrocities as "Other Skills: LTEX". I can prettify plain-text files without too much effort, so that isn't a big deal. However, my LaTeX resume is going to look better than anything I can do in plain text, so if the site is actually keeping a copy of it, then I certainly don't want to override it. Has anyone here either gone through the Amazon hiring process or interviewed candidates, and can tell me how this works? (i.e., when on site with Amazon, did the interviewers have diversely formatted resumes, or did they all look suspiciously similar?)

    Read the article

  • postfix smtps issue

    - by DavidC
    I'm currently experiencing the following issue with Postfix over SSL (smtps):

        Apr 7 13:43:55 server88-208-248-147 postfix/smtpd[5777]: connect from xxxxxxxxxxxxxxx[xxx.xxx.xxx.xxx]
        Apr 7 13:45:09 server88-208-248-147 postfix/smtpd[5777]: lost connection after UNKNOWN from xxxxxxxxxxxxxxx[xxx.xxx.xxx.xxx]
        Apr 7 13:45:09 server88-208-248-147 postfix/smtpd[5777]: disconnect from xxxxxxxxxxxxxxx[xxx.xxx.xxx.xxx]

    My main.cf is as follows:

        smtpd_tls_cert_file = /etc/postfix/smtpd.cert
        smtpd_tls_key_file = /etc/postfix/smtpd.key
        smtpd_use_tls = yes
        smtp_use_tls = yes
        smtpd_tls_auth_only = no
        smtpd_tls_CAfile = /etc/postfix/caroot.crt
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_tls_loglevel = 1

    When connecting over SMTP and running STARTTLS, I get the following:

        # telnet xxxxxxxxxxxxxxx 25
        Trying xxxxxxxxxxxxxxx...
        Connected to xxxxxxxxxxxxxxx.
        Escape character is '^]'.
        220 xxxxxxxxxxxxxxx ESMTP Postfix
        ehlo localhost
        250-xxxxxxxxxxxxxxx
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH PLAIN LOGIN
        250-AUTH=PLAIN LOGIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
        STARTTLS
        220 2.0.0 Ready to start TLS

    Please help, as I'm at a loss for places to look now. The OS is Ubuntu 10.04, and the SSL certificate is a wildcard; IMAP/POP and Apache work flawlessly with the same certificate.
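
    "lost connection after UNKNOWN" is the typical signature of a client speaking raw TLS (smtps, port 465) to a port that expects plaintext plus STARTTLS; a hedged way to confirm from outside (the hostname is a placeholder):

        # STARTTLS on 25 - this matches the working telnet transcript:
        openssl s_client -starttls smtp -connect mail.example.com:25
        # Raw TLS on 465 - this only succeeds if master.cf runs a wrapper-mode smtpd
        # (an "smtps" service with -o smtpd_tls_wrappermode=yes):
        openssl s_client -connect mail.example.com:465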

    Read the article

  • video editing tool to color overlay a specific part

    - by Santosh
    I have downloaded a video from YouTube, but the uploader has put some links (their Twitter and Facebook) in it for promotional purposes. The links keep coming up throughout the video in the black areas (the bars above and below the picture). Thank goodness the links are on the black part of the video; otherwise they would be hard to remove. I also want to remove the last few seconds of the video, without cropping that part. The video is in MP4 format. I don't want to lose quality in any way; I won't mind if the file size increases. I want an open-source, free tool, ideally available on both Windows and Ubuntu. Here is a link to the video.
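
    FFmpeg fits the free, Windows-plus-Ubuntu requirement; a hedged sketch (the 60-pixel band, the 02:55 cutoff and the filenames are all placeholders to adjust). Painting over pixels forces a re-encode, so a low CRF is used to keep the quality loss negligible:

        # Black out the top bar where the links appear, and stop before the last few seconds:
        ffmpeg -i input.mp4 -vf "drawbox=x=0:y=0:w=iw:h=60:color=black:t=fill" \
               -t 00:02:55 -c:v libx264 -crf 18 -c:a copy output.mp4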

    Read the article

  • Complex shading using one single (small) texture

    - by teodron
    Recently I stumbled upon a demo reel for UDK showing how one can attain beautiful results using just one (rather tiny) texture sent through the shader pipeline. The famous link is this one. Basically, the author states that they used just one texture, and gives a snapshot of the technique here. I see that every RGBA channel contains different grayscale information, and that info could be used inside a shader to obtain a colour-blended output. The problem is that the reel displays a fairly complex scene. On top of that, the author even makes use of a normal map. How did they manage to fit a normal map into an already cluttered texture? It makes sense to store a half-space normal map using only the RG channels of an RGB texture, but what about the rest of the information? Since it was proven possible, could someone please explain how it was done (the big picture, not the dirty details)? Here's the texture being used.

    Read the article

  • Difference between key_buffer settings, and a recommendation

    - by Typeoneerror
    I'm looking to give a bit more memory to MySQL on a Linode VPS, where I run a small Facebook (canvas app) PHP app backed by MySQL. I'm not super familiar with MySQL optimization, so I'm hoping to find a simple answer. I think I want to increase the key_buffer size (the default is 16M) to something like 32M to start, but I'm not sure if I need to tweak anything else as well. All I've done so far is increase query_cache_size from 16M to 32M. There's also a key_buffer under [mysqld] and a key_buffer under [isamchk]. What is the difference between those two? Given a Linode 2048 MB (http://www.linode.com) VPS, what would you recommend I set the buffers to? I don't expect this site to have tons of visitors, but I'd like it to be as optimized as possible. It's definitely much heavier on database access than on PHP, with very few HTTP requests.
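
    For what it's worth, the [isamchk] section configures the standalone isamchk/myisamchk table-repair utility, not the running server; only the [mysqld] key_buffer affects queries. A hedged way to size it from the server's own counters instead of guessing:

        # A Key_reads/Key_read_requests ratio well under 1% means the key cache is big enough:
        mysql -e "SHOW GLOBAL STATUS LIKE 'Key_read%';"
        mysql -e "SHOW VARIABLES LIKE 'key_buffer_size';"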

    Read the article
