Search Results

Search found 20163 results on 807 pages for 'struct size'.

Page 531/807 | < Previous Page | 527 528 529 530 531 532 533 534 535 536 537 538  | Next Page >

  • Unable to get to remote samba share

    - by tubaguy50035
    I have a remote VPS that I would like to set up Samba on and only allow my IP access to it. I currently have this in my smb.conf:

        [global]
        netbios name = apollo
        security = user
        encrypt passwords = true
        socket options = TCP_NODELAY
        printing = bsd
        log level = 3
        log file = /var/log/samba/log/%m
        debug timestamp = yes
        max log size = 100

        [hosting]
        path = /hosting/
        comment = Hosting Folder
        browseable = yes
        read only = yes
        guest account = yes
        valid users = nick

    I have the ports (137, 138, 139, 445) open in iptables (they're open to everyone right now while I debug) and I see nothing in the syslog about iptables blocking my requests. When I try to open a file browser to my address \\ipaddress, it hangs for a good thirty seconds and then opens a login box. I enter my username and password for the server and hit OK. It then opens the same box; I enter my credentials again and hit Enter. Windows then tells me it could not connect. My user account is already added to Samba. Does anybody have any suggestions for what I can do to get this working?
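
    For reference, a minimal sketch of restricting the share to a single address with Samba's own host-based access control (the address below is a placeholder, not from the question):

        [global]
        hosts allow = 203.0.113.45 127.0.0.1
        hosts deny  = 0.0.0.0/0

    hosts allow/deny are standard smb.conf options; the iptables side would additionally need TCP 139/445 and UDP 137/138 limited to the same source address.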

    Read the article

  • Post raid5 setup reboot shows single hard drive failure on ubuntu 12.10?

    - by junkie
    I just set up RAID 5 on Linux using three HDDs, following a guide. It all went fine until I rebooted and got the following text: http://i.stack.imgur.com/Zsfjk.jpg. Does this mean one of my HDDs has failed? How do I check whether any of them are failing? I tried using smartctl and didn't see any issues. Or does it have nothing to do with failure and is something else altogether? I would like to get the RAID 5 working again, but I'm not sure where to go from here. I'm using Ubuntu 12.10, and the three RAID disks each have a GPT partition table with a single full-size partition of filesystem type ext4. Note that I only got an error on reboot, not while I was creating the RAID array, which went fine. Thanks.
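
    A hedged checklist for telling a failed member from a plain assembly problem (device names are assumptions; adjust them to the actual array and disks):

        cat /proc/mdstat                    # [UUU] means all members up; [_UU] means one dropped out
        sudo mdadm --detail /dev/md0        # per-device state: active, faulty, removed, spare
        sudo mdadm --examine /dev/sda1      # RAID superblock info for one member partition
        sudo smartctl -H /dev/sda           # quick SMART health verdict for the underlying disk

    If mdstat shows the array inactive rather than degraded, the boot message may only mean the array wasn't assembled (for example a missing ARRAY line in /etc/mdadm/mdadm.conf), not that a drive died.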

    Read the article

  • Problem in multi booting Ubuntu 12.04 with existing Windows XP, 7 and 8 in 500GB HDD with 5 Partitions

    - by Dhruva
    Here's my case. I have a 500GB HDD with 5 partitions, with XP, Windows 7 and Windows 8 RP in the first three. As per one of the instructions I've seen in this forum, I shrank my 4th partition to create 30GB of unallocated free space to install Ubuntu 12.04. But when I then try to boot the Ubuntu CD and choose "Something Else", it only recognizes my 500GB HDD as a whole, as "/sda", and does not show the free 30GB space separately so I can install Ubuntu into it, as suggested in the instructions mentioned in this forum. I've also tried to install it from within Windows 7, by mounting the Ubuntu ISO file and using the .exe installer and the instructions that go with it (choosing the free drive, user name, installation size, etc.), but that also failed after the PC restarted to continue the installation, showing some error about a file extension or partition, something along those lines. One thing to note is that the PC I'm trying to install Ubuntu on is my home PC and doesn't have any internet connection, so no updates or other online help. What shall I do? Kindly suggest. Sorry if I made some grammatical mistakes, as English is not my first language. Thanks in advance.

    Read the article

  • Install Windows7 on drive with Ubuntu 12.04 already on. Is my plan good?

    - by John F
    I have Ubuntu 12.04 working fine, but need Windows 7 occasionally. I just wanted to check that my plan for installing it would work. Any help appreciated. Current partitions are:

        Partition     File System   Mount Point   Size      Used      Flags
        /dev/sda1     ext4          /ext4a        37 GiB    776 MiB   boot
        /dev/sda2     extended                    122 GiB   -
        /dev/sda5     ext4          /             37 GiB    6 GiB
        unallocated   unallocated                 7 GiB     -
        /dev/sda6     ext4          /home         77 GiB    32 GiB
        unallocated   unallocated                 65 GiB    -
        /dev/sda3     linux-swap                  7 GiB     -

    My plan is to:
      - boot to Ubuntu from a USB ISO
      - change sda1 to NTFS
      - install Windows 7 to sda1
      - use the "Master Boot Record repair" utility to configure dual boot so I can see my original Ubuntu installation as well as Windows 7.

    Have I missed something? I'm concerned about what the 776 MiB is that will be overwritten by the change to NTFS; it seems large for just the MBR. I would also appreciate it if anyone can explain what sda5 and sda6 are being used for. Is sda5 Ubuntu and sda6 my data? Thanks in advance.
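
    For reference, a hedged sketch of restoring the boot menu after a Windows install overwrites the MBR, assuming the Ubuntu root really is /dev/sda5 and that the work is done from the live USB (a sketch, not a tested recipe):

        # from the Ubuntu live USB, reinstall GRUB to the disk's MBR
        sudo mount /dev/sda5 /mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sda
        sudo reboot

        # after rebooting into Ubuntu, pick up the new Windows entry
        sudo update-grub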

    Read the article

  • Pipe an infinite stream to internal loop?

    - by Sh3ljohn
    I've seen a lot of things about redirecting stdout to a TCP socket, but no real example of how to do it in practice, specifically when the output stream generated by the first command never ends. To talk about something concrete, take programs like servers that typically write their log endlessly to stdout (well, as long as they run). If you redirect the output to a log file on disk, then this file is always open (and therefore not readable by others?) and grows without bound, which eventually causes problems. This might be a noob question, but I don't know what to do or how to do it, so:

      1. How do I redirect the output of a command to the internal loop?
      2. I want to make sure that data is sent EVERY time something is written to stdout, and that the pipe won't wait for the command to end (which ideally never happens). Is that right?
      3. If 2 is true, is there a buffer system that sends chunks of data only once they reach a certain size?

    Could you give me concrete command line examples to do the above? Thanks in advance.
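
    A minimal sketch of streaming a never-ending stdout to a TCP socket; the host, port, and server command are placeholders, and netcat option syntax varies slightly between variants:

        # force line buffering so each log line is sent as soon as it is written
        stdbuf -oL ./my-server 2>&1 | nc loghost.example.com 5140

        # on the receiving end (traditional netcat syntax; some variants use: nc -l 5140)
        nc -l -p 5140 >> server.log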

    Read the article

  • Saving 16:9 video in Movie Maker without black border

    - by Tschareck
    I'm editing my video in Windows Live Movie Maker from Live Essentials 2011. My source video is from a camera, in .mp4 format with a size of 1280 x 720. After editing in Movie Maker, I save the movie, and no matter what option I choose, I always end up with a .wmv file that is either a 4:3 image with black stripes above and below the video, or 16:9 with a black frame all around the image. What settings should I use to be able to export or save the video at 1280 x 720 without any black border?

    Read the article

  • nginx static file buffer

    - by Philip
    I have an NFS share that several frontend servers are connected to, to make the files stored on it available for HTTP downloads. It looks like I have problems with the way Apache is serving the files: there seems to be a very small buffer, or no buffer at all, which results in a lot of disk seeks. I did some testing with loading the whole requested file into memory at once and serving it to the client from memory. With this technique I need fewer disk seeks per download stream. Since I don't want to implement this myself for production use, I thought I could maybe use nginx for it, because the documentation says that it uses buffers for static file serving. Is it possible to increase the buffer size to a few MB, and if so, which config parameter do I have to change? Does anyone have experience with large buffers for static file serving? Is there a better way to reduce disk seeks?
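
    A hedged sketch of the relevant nginx directives; the path and buffer size are illustrative values, not tuned recommendations:

        location /downloads/ {
            root            /mnt/nfs;
            sendfile        off;        # read through nginx's own buffers instead of sendfile()
            output_buffers  1 2m;       # number and size of buffers used when reading files from disk
        }

    output_buffers only matters when sendfile is off; with sendfile on, the kernel handles the copy and the directive is not used for file reads.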

    Read the article

  • df says disk is full, but it is not

    - by Chris
    On a virtualized server running Ubuntu 10.04, df reports the following:

        # df -h
        Filesystem    Size  Used  Avail  Use%  Mounted on
        /dev/sda1     7.4G  7.0G      0  100%  /
        none          498M  160K   498M    1%  /dev
        none          500M     0   500M    0%  /dev/shm
        none          500M   92K   500M    1%  /var/run
        none          500M     0   500M    0%  /var/lock
        none          500M     0   500M    0%  /lib/init/rw
        /dev/sda3     917G  305G   566G   36%  /home

    This is puzzling me for two reasons:

      1. df says that /dev/sda1, mounted at /, has a 7.4 gigabyte capacity of which only 7.0 gigabytes are in use, yet it reports / as 100 percent full; and
      2. I can create files on /, so it clearly does have space left.

    Possibly relevant is that the directory /www is a symbolic link to /home/www, which is on a different partition (/dev/sda3, mounted at /home). Can anyone offer suggestions on what might be going on here? The server appears to be working without issue, but I want to make sure there's not a problem with the partition table, file systems or something else which might result in implosion (or explosion) later.
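
    A few hedged checks that distinguish the usual causes of this symptom (reserved root blocks, inode exhaustion, or deleted-but-still-open files); device names follow the question:

        df -i /                                          # inode usage; 100% here explains a "full" disk with free blocks
        sudo tune2fs -l /dev/sda1 | grep -i reserved     # ext filesystems reserve ~5% of blocks for root by default
        sudo lsof +L1                                    # files deleted on disk but still held open by a process
        sudo du -xsh /* 2>/dev/null                      # where the space on / actually went (-x stays on one filesystem)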

    Read the article

  • Can't get Unity 3D to work in 11.10

    - by pmoseph
    I recently upgraded to 11.10 on my Lenovo ThinkPad T520, and I'm not able to load Unity 3D (I'm not selecting 2D at the login menu either).

        me@mycomp:~$ echo $DESKTOP_SESSION
        ubuntu-2d

    I ran the unity support test below as well.

        me@mycomp:~$ /usr/lib/nux/unity_support_test -p
        Xlib:  extension "GLX" missing on display ":0.0".
        Xlib:  extension "GLX" missing on display ":0.0".
        Xlib:  extension "GLX" missing on display ":0.0".
        Error: unable to create the OpenGL context

    And it looks like I only have one graphics card:

        me@mycomp:~$ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)

    Also, Ubuntu lists nothing under the "Additional Drivers" window. Any help would be extremely appreciated as I'm somewhat of a noob. Thanks!

    Edit 1: Here is the output of lshw -C display

        me@mycomp:~$ sudo lshw -C display
          *-display
               description: VGA compatible controller
               product: 2nd Generation Core Processor Family Integrated Graphics Controller
               vendor: Intel Corporation
               physical id: 2
               bus info: pci@0000:00:02.0
               version: 09
               width: 64 bits
               clock: 33MHz
               capabilities: msi pm vga_controller bus_master cap_list rom
               configuration: driver=i915 latency=0
               resources: irq:43 memory:f0000000-f03fffff memory:e0000000-efffffff ioport:5000(size=64)

    Read the article

  • Upload large database SQL file

    - by Devy
    I have a database of more than 20 GB in size on my hard disk. What is the best way to upload it with the least (money) load possible on the server?

      - I'm on Windows 7.
      - I have FTP and SSH access to the server.

    I avoid using FTP because my connection cuts off a lot; I can't imagine re-uploading the file again after failing at 99%. I found some tools that split the large .sql file into small .sql files, but they didn't mention how to gather these files back into one file. Another way is to archive the big .sql file to .rar with the -v option, upload the parts through FTP, and then unpack them. But unpacking will also cost, right? I know it will cost in any case, but any best practice will be strongly appreciated.
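
    A hedged sketch of a compressed, resumable transfer over the existing SSH access; hostnames, paths, and database names are placeholders, and it assumes an rsync client (for example Cygwin's or cwRsync) is available on the Windows 7 side:

        gzip -c dump.sql > dump.sql.gz                      # SQL dumps usually compress very well

        # --partial keeps the partly-transferred file so a dropped connection can resume
        rsync -avz --partial --progress dump.sql.gz user@server:/var/backups/

        # on the server, restore straight from the compressed file
        gunzip -c /var/backups/dump.sql.gz | mysql -u dbuser -p dbname

        # splitting route instead: split locally, upload the parts, then reassemble on the server with
        #   cat part_* > dump.sql.gz
        split -b 200m dump.sql.gz part_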

    Read the article

  • Nginx, logrotate and empty files

    - by tzulberti
    I have a problem with nginx/logrotate. The situation is that nginx is logging access to two files (main and data). I have the following crontab setting:

        0 * * * * /usr/sbin/logrotate -f /home/orwell/orwell-setup/bin/logrotate-nginx

    And the file "logrotate-nginx" has the following content:

        /tmp/data.log {
            rotate 90
            daily
            missingok
            notifempty
            size 1
            sharedscripts
            postrotate
                [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid`
                MORE THINGS
            endscript
        }

        /tmp/main.log {
            rotate 90
            daily
            missingok
            notifempty
            size 1
            sharedscripts
            postrotate
                [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid`
                MORE THINGS
            endscript
        }

    The rotation happens for both files, but then nginx stops logging into them. Both files are created, but they are empty. Any ideas why nginx stops logging info to both files?
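
    A hedged alternative that avoids the reopen-signal dance altogether, at the cost of possibly losing a few lines during the copy (copytruncate is a standard logrotate directive):

        /tmp/data.log /tmp/main.log {
            rotate 90
            daily
            missingok
            notifempty
            copytruncate      # copy the log, then truncate it in place; nginx keeps writing to the same open file
        }

    It is also worth checking `ls -l /proc/$(cat /tmp/nginx.pid)/fd` after a rotation to see which files the running nginx actually has open.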

    Read the article

  • file copy error from system to cifs mount

    - by dwpriest
    When copying a file greater than 64 kB from an Ubuntu server to a CIFS-mounted Windows share, most of the data is copied, but it seems the last chunk doesn't get copied: the size doesn't match, and the md5 checksums don't match. I have plenty of file space, but when I use cp, I get the following:

        cp: closing `cloudBackup/asdf.txt': No space left on device

    Using rsync, I get the following:

        rsync: close failed on "/home/fluffy/cloudBackup/.asdf.txt.qrBWe6": No space left on device (28)
        rsync error: error in file IO (code 11) at receiver.c(752) [receiver=3.0.8]
        rsync: connection unexpectedly closed (29 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.8]

    I have full read/write permissions on the mounted share. I can copy via SSH just fine. Any ideas? Thank you.
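
    A hedged remount sketch; the share path, mount point, and sizes are illustrative only (a write size the server will not accept is one common cause of this exact symptom):

        # see what options the share is currently mounted with
        mount | grep cifs

        # remount with an explicit, smaller write/read size
        sudo umount /home/fluffy/cloudBackup
        sudo mount -t cifs //winhost/backup /home/fluffy/cloudBackup \
            -o username=fluffy,wsize=65536,rsize=65536

    Checking quota settings on the Windows side of the share is also worthwhile, since the client reports quota exhaustion as "No space left on device".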

    Read the article

  • MongoDB: Replicate data in documents vs. “join”

    - by JavierCane
    Disclaimer: This is a question derived from this one. What do you think about the following example of a use case?

    I have a table containing orders. These orders have a lot of related information needed by my current queries (think about the products; the buyer information; the region, country and state of the sale point; and so on). To think with a de-normalized approach, I don't have to put identifiers of these related items in my main orders collection. Instead, I have to repeat all the information for each order (i.e. I will repeat the buyer's name, surname, etc. for each of their orders). Assuming the previous premise, I'm committing to maintain all the data related to an order without a lot of updates (because if I modify the buyer's name, I'll have to iterate through all orders updating the ones made by the same buyer, and as MongoDB blocks at a document level on updates, I would be blocking the entire order at the moment of the update).

      - Will I have to replicate all the products' related data (i.e. category, maker and optional attributes like color, size...)?
      - What if a new feature is requested and I have to make a lot of queries with the products "as the entry point of the query" (i.e. reports showing the products' sales performance grouped by region, country, or whatever)? Is it fair enough to apply the $unwind operation to my original orders collection? (What about the performance?)
      - Should I create another collection with these queries in mind and replicate again all the products' information (and their orders)?
      - Wouldn't it be better to store a product_id in the original orders collection in order to be more tolerant to requirements change? (What about emulating JOINs?)
      - Would the optimal approach be a mixed solution with an RDBMS like MySQL in order to retrieve the complete data? I mean: store products, users, and location identifiers in the orders collection and have queries in MySQL like getAllUsersDataByIds, in which I would perform a SELECT * FROM users WHERE user_id IN ( :identifiers_retrieved_from_the_mongodb_query ).

    Read the article

  • Who should have full visibility of all (non-data) requirements information?

    - by ebyrob
    I work at a smallish mid-size company where requirements are sometimes nothing more than an email or a brief meeting with a subject matter manager requesting some new feature. Should a programmer working on a feature reasonably expect to have access to such "request emails" and other requirements information? Is it more appropriate for a "program manager" (PGM) to rewrite all requirements before sharing them with programmers? The company is not technology-centric and has between 50 and 250 employees (fewer than 10 programmers in all). Our project management "software" consists of a "TODO.txt" checked into source control in "/doc/". Note: This has nothing to do with "sensitive data access", unless a particular subject matter manager's style of email correspondence is top secret. Given the suggested duplicate, perhaps this could be a turf war, as the PGM would like to specify HOW, whereas WHY is absent and WHAT is muddled by the time it gets through to the programmer(s)... Basically: should specifications be transparent to programmers? Perhaps a history of requirements exists. Shouldn't a programmer be able to see that history of reqs if/when they can tell something is hinky in the spec? This isn't a question about organizing requirements. It is a question about WHO should have full VISIBILITY of requirements. I'd propose it should be ALL STAKEHOLDERS. Please point out where I'm wrong here.

    Read the article

  • DD-WRT: What firmware and what webserver will fit on my 8MB of flash?

    - by Jeshii
    I'm attempting to make a portable WiFi webserver with PHP support on an old WRT54GS (v1.0) with DD-WRT. I have 8MB of flash on there. I know, it's a tall order. I tried the combination of dd-wrt.v24-13064_VINT_openvpn_jffs_small.bin, Optware, and lighttpd. Didn't have enough space. Now I'm going to try dd-wrt.v24-13064_VINT_mini.bin, but I'm only saving 300KB, and I don't think that is going to make the difference. Any other small HTTP servers with PHP support? Heck, I didn't even get to the point where I could add PHP! Maybe a way to calculate the size and dependencies of packages from Optware BEFORE trying to install is more what I'm looking for. Any ideas?
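
    A hedged sketch of checking space and package metadata before committing to an install; it assumes the Optware ipkg tool behaves like its opkg descendant here, which is an assumption rather than something the question confirms:

        df -h /jffs /opt          # free space on the flash (jffs) partition and on /opt, if mounted
        ipkg update
        ipkg info lighttpd        # assumed to print the package's Size: and Depends: fields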

    Read the article

  • How to update OpenSSL using Putty and yum command

    - by JM4
    I am so new to updating server technologies it is unbelievable, but we are trying to become PCI compliant and have to update some of our server technologies. One in particular is OpenSSL. We are currently running arch i686 0.9.8e, but we have to upgrade to AT LEAST 0.9.8g. When I run a yum update command, there are no updates available. If I run "yum info openssl", it says the available package is arch i386 0.9.8e, and the only difference is a smaller file size. I am running the following repositories:

        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirrors.netdna.com
         * atomic: www6.atomicorp.com
         * base: mirrors.igsobe.com
         * extras: mirror.vcu.edu
         * updates: mirror.vcu.edu

    Any help out there?
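
    A hedged sketch of checking what the distro package actually contains; on RHEL/CentOS-style systems, security fixes are usually backported without bumping the upstream version string, which is often what PCI scanners trip over:

        openssl version                                   # upstream version string only
        rpm -q openssl                                    # full package version-release
        rpm -q --changelog openssl | grep -i cve | head   # CVEs patched into this particular build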

    Read the article

  • Smarter Search Results in NetBeans IDE 7.2

    - by Geertjan
    After you search your code using NetBeans IDE (using Ctrl-F for "Find" or Ctrl-H for "Replace"), you see the Search Results window, which looks like this: At least, the above is how it looks in NetBeans IDE 7.2. Before that, you didn't have all those extra columns (which can be displayed in the Search Results window after clicking the small button top right in the view) and you also didn't have the quick search (which is invoked by typing directly into the Search Results window), as can be seen here: So, the Search Results window now provides a lot more info than before. Being able to know the path to a file I've found, as well as the last modification date, file size, and the number of matches within the file, is useful at the end of a search process. In the NetBeans IDE 7.2 New & Noteworthy, the above changes are described in the Utilities section, as well as in the Quick Search in OutlineView section, where you can read that these are generic solutions that can be used in your own OutlineViews. Other OutlineViews in NetBeans IDE 7.2, such as the Debugger window, now also have these new features. A related article worth reading is Beefed Up Code Navigation Tools in NetBeans IDE 7.2. 

    Read the article

  • Problem with fitting graphics card

    - by Gary Robinson
    I have recently purchased a new Radeon HD 5670 graphics card. The problem I have is that it is what I might call a double-height card, so the end where you fix the card to the case is twice the size of the card it is replacing. This means it is blocked by the rear of the network card. The only way I can see is to cut that part of the fixture out; then I won't have a problem. But does anyone have any suggestions for what I could use? To be clear, it is not the actual card but the fixture part, which is at 90 degrees to the card, that is causing the problem. Unless someone knows of another solution - some kind of PCI-E extender that I could use as I could easily

    Read the article

  • Should I format USB sticks and SD cards to FAT, FAT32, exFAT or NTFS? (Windows files, live Linux distors)

    - by superuser
    Does which one to choose depend on the media size, or on some other parameters? In Windows 7, FAT16 is the default; in pendrivelinux.com's Universal USB Installer, it's FAT32. Which one should I choose? How about NTFS for Windows use? How about exFAT? It is the Microsoft-designed filesystem for removable media. Is there a difference between USB sticks and SD cards in this regard? Edit: seeing developments in the other thread, should I still use something like exFAT if I don't want Recycle Bins created on every single machine I plug my USB thumb drive into?

    Read the article

  • Is there a program that will show a tree of the differences in two file trees?

    - by Huckle
    In Windows I manually back up from time to time by formatting my external drive and copying the contents of my data partition over. Inevitably there is a difference in the number and size of the files copied because of system files, etc. Is there a program that would diff two directories recursively and compile the differences into a nice GUI tree that I could peruse (and preferably filter) to ensure that everything I want made it over to the drive? It should only show files that are not in both directories. (Also, please ignore the inadequacy of my backup solution.)
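
    For reference, a hedged command-line sketch that produces the same information (a GUI tool would essentially sit on top of something like this); it assumes a Unix-style toolset such as Cygwin or Git Bash is acceptable, and the paths are placeholders:

        # names that exist on only one side, plus files whose contents differ
        diff -qr /cygdrive/d/data /cygdrive/e/backup

        # or a dry run showing what would still need copying to the backup
        rsync -rvnc --delete /cygdrive/d/data/ /cygdrive/e/backup/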

    Read the article

  • On Windows 7, how can I tell if a recording is multi-channel without third party tools?

    - by engineerchuan
    A customer has an audio recording that is confidential and can't send it to me. He would also rather not install other tools, and he has a basic Windows 7 install. Is there any way to tell whether the recording is one channel or two? Normally, I would just get the audio and soxi it, or I would tell him to install Audacity or an equivalent sound editor and open it up. I also thought that if you right-clicked and looked at the size, bit rate, and length, you could work out the number of channels, but the bit rate already factors in the number of channels. Sorry I'm not giving you a lot to work with.

    Read the article

  • Is ext4 more expensive than ntfs?

    - by ???
    I have just converted an NTFS partition to ext4, but the total space seems to have shrunk from 421G to 415G. Where did the 6G go? Also, the reserved space has grown to 199M on ext4, much larger than the 78M on NTFS. Why? The partition is mainly used for movies/music, so most files are very large (10M each). I want to use the ext4 file system; is there any suggestion?

        mkfs.ntfs:
        /dev/sdb4   421G   78M   421G   1%   /mnt/mmedia

        mkfs.ext4:
        /dev/sdb4   415G   199M  393G   1%   /mnt/mmedia

    It's also weird that the remaining size on ext4 is 393G; shouldn't it be 415G or 414G? What happened to the missing 22G? Compared to NTFS, ext4 seems to have eaten 28G in total.
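
    A hedged sketch of where the "missing" space usually goes on ext4 (inode tables plus the default 5% of blocks reserved for root) and how to inspect or reduce it; the device name follows the question:

        sudo tune2fs -l /dev/sdb4 | grep -Ei 'reserved block count|inode count|block size'
        sudo tune2fs -m 1 /dev/sdb4     # shrink the root-reserved share from 5% to 1%
        sudo tune2fs -r 0 /dev/sdb4     # or remove the reservation entirely (reasonable for a media-only partition)

        # at format time, mkfs.ext4 -T largefile allocates far fewer inodes for partitions of big files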

    Read the article

  • How soon does nginx's token bucket replenish when limiting at requests per minute?

    - by Michael Gorsuch
    We've decided that we want to experiment and limit requests per minute instead of requests per second on our sites. However, I am confused by the burst parameter in this context. I am under the impression that when you use the 'nodelay' flag, the rate limiting facility acts like a token bucket instead of a leaky bucket. That being the case, the bucket size is equal to the burst parameter, and every time that you violate the policy (say 1 req/s), you have to put a token in the bucket. Once the bucket is full (being equal to the burst setting), you are given a 503 error page. I am also under the impression that once a violator stops going against the policy, a token is removed from the bucket at a rate of 1 token/s allowing him to regain access to the site. Assuming that I have the above correct, my question is what happens when I start regulating access per minute? If we chose 60 requests per minute, at what rate does the token bucket replenish?
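
    For reference, a hedged example of the per-minute form of the configuration (zone name and sizes are placeholders); as far as the standard ngx_http_limit_req_module behaves, a 60r/m rate is tracked continuously, so the allowance is replenished at roughly one request per second rather than being reset in whole-minute steps:

        http {
            # 60 requests per minute per client IP, tracked in a 10 MB shared zone
            limit_req_zone $binary_remote_addr zone=perminute:10m rate=60r/m;

            server {
                location / {
                    # tolerate short spikes of up to 20 excess requests, without smoothing delay
                    limit_req zone=perminute burst=20 nodelay;
                }
            }
        }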

    Read the article

  • Error while mounting home directory on different logical volume

    - by RCola
    I created a RAID 5 array from 3 hard drives and formatted it as ext4. I then created the VG0 volume group and the lv_home logical volume in LVM. When I try to mount the logical volume lv_home on /home, the folder containing the user profiles, I get this error:

        mount: wrong fs type, bad option, bad superblock on /dev/mapper/VG0-lv_home

    The device node seems to be a symbolic link:

        # file -s /dev/VG0/lv_home
        /dev/VG0/lv_home: symbolic link to `../mapper/VG0-lv_home'

    and then:

        # file -s /dev/mapper/VG0-lv_home
        /dev/mapper/VG0-lv_home: data

    and in LVM:

        lvm> pvs
          PV         VG   Fmt  Attr PSize PFree
          /dev/md0   VG0  lvm2 a-   2.02g 68.00m

        lvm> lvdisplay
          --- Logical volume ---
          LV Name                /dev/VG0/lv_home
          VG Name                VG0
          LV UUID                WzJus7-2yV8-yhog-Ju1b-TpWH-IIAI-LIutwe
          LV Write Access        read/write
          LV Status              available
          # open                 0
          LV Size                1.17 GiB
          Current LE             300
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           251:0
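
    A hedged sketch of the usual fix for this symptom: `file -s` reporting plain "data" suggests the logical volume itself was never given a filesystem (the earlier mkfs apparently ran against the raw array rather than the LV). The mount point and copy step below are illustrative:

        sudo mkfs.ext4 /dev/VG0/lv_home            # create the filesystem on the LV itself
        sudo mkdir -p /mnt/newhome
        sudo mount /dev/VG0/lv_home /mnt/newhome   # mount somewhere temporary first
        sudo rsync -aX /home/ /mnt/newhome/        # copy the existing profiles across
        # then add an /etc/fstab entry for /dev/VG0/lv_home on /home and remount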

    Read the article

  • Why doesn't postfix use my smtp_generic_maps?

    - by RichardTheKiwi
    What have I set up incorrectly?

        > postconf -n
        ....
        smtp_generic_maps = regexp:/etc/postfix/rewrite
        ....

        > cat /etc/postfix/rewrite
        /.*/ [email protected]

        > echo "test" | mail -s "test" [email protected]

        > tail -f /var/log/mail.log
        Dec 8 05:56:01 xxxxxxxxxxxx postfix/pickup[20227]: E9272709284: uid=501 from=<yyyy>
        Dec 8 05:56:01 xxxxxxxxxxxx postfix/cleanup[20270]: E9272709284: message-id=<[email protected]>
        Dec 8 05:56:01 xxxxxxxxxxxx postfix/qmgr[20228]: E9272709284: from=<[email protected]>, size=331, nrcpt=1 (queue active)
        Dec 8 05:56:03 xxxxxxxxxxxx postfix/smtp[20272]: E9272709284: to=<[email protected]>, relay=mailinator.com[72.51.33.80]:25, delay=1.1, delays=0.02/0.01/0.48/0.58, dsn=2.0.0, status=sent (250 Ok)

    FYI, I have reloaded postfix many times:

        sudo postfix reload

    Note: This is on OSX 10.7.5.
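
    A hedged way to check whether the regexp table itself matches, independent of the mail flow (postmap -q queries a lookup table directly; the test address is a placeholder):

        # should print the rewritten address if the pattern matches
        postmap -q "anything@localdomain" regexp:/etc/postfix/rewrite

        # confirm the running configuration actually carries the setting
        postconf smtp_generic_maps

    Note that smtp_generic_maps is applied by the smtp client at delivery time, so the rewrite only shows up in the headers of the delivered message, not in the earlier pickup/cleanup log lines.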

    Read the article

< Previous Page | 527 528 529 530 531 532 533 534 535 536 537 538  | Next Page >