Search Results

Search found 111524 results on 4461 pages for 'user mode linux'.


  • Decompressing Files on an NTFS Volume from Linux

    - by amphetamachine
    I recently did something stupid on my dual-boot laptop: I compressed the entire volume to make room for a Linux partition. For some reason, Windows let me compress C:\ntldr, and now I need to get it uncompressed in order for Windows to boot. Here are the restrictions I'm operating under: I do not have access to the BIOS; I cannot boot from CD/USB/floppy (I installed Linux through PXE); and the machine does not have network access. Is there some way, when mounting the volume, to tell the ntfs-3g driver not to compress files even when the containing directory is marked compressed? Or is there a way to modify the attributes of a directory using ntfstools?
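
    One thing worth checking from the Linux side, as a sketch only: recent ntfs-3g builds expose each file's Windows attribute flags through the system.ntfs_attrib extended attribute, where FILE_ATTRIBUTE_COMPRESSED is bit 0x800. Whether the flag can also be cleared this way depends on the ntfs-3g version, so the mount point and the write step below are assumptions to verify against your build's man page, ideally on a throwaway file first.

        # Sketch only: assumes the volume is mounted read-write with ntfs-3g and
        # that this ntfs-3g build exposes (and accepts writes to) the
        # system.ntfs_attrib extended attribute. Run as root.
        import os
        import struct

        FILE_ATTRIBUTE_COMPRESSED = 0x0800
        path = "/mnt/windows/ntldr"            # hypothetical mount point

        raw = os.getxattr(path, "system.ntfs_attrib")    # 32-bit attribute flags
        (flags,) = struct.unpack("<I", raw)              # little-endian on x86
        print(f"attributes {flags:#010x}, compressed: {bool(flags & FILE_ATTRIBUTE_COMPRESSED)}")

        if flags & FILE_ATTRIBUTE_COMPRESSED:
            # Clearing the flag only changes the attribute; to force the data
            # itself to be rewritten uncompressed, copy the file to a new name
            # afterwards and rename it back over the original.
            os.setxattr(path, "system.ntfs_attrib",
                        struct.pack("<I", flags & ~FILE_ATTRIBUTE_COMPRESSED))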

    Read the article

  • Squid/Kerberos authentication with only Linux

    - by user28362
    Hi, I would like to know if it is possible to let a Windows XP machine authenticate to Squid (on Linux) using Kerberos, without the need for an Active Directory domain. I only want to create a Kerberos ticket on the client side, which should give the client access to Squid (using IE). I have only found tutorials about configuring A.D./Squid, not an environment with only Linux servers. Thanks. Update: The Kerberos setup is done correctly; the proxy and the client can both get tickets. In the browser (FF/IE), I get: ERROR Cache Access Denied. While trying to retrieve the URL http://www.google.com/ the following error was encountered: Cache Access Denied. Sorry, you are not currently allowed to request http://www.google.com/ from this cache until you have authenticated yourself. On the Kerberos side, I get: squid_kerb_auth: Got 'YR ElRNTVMTUABBAABAB4IIogAAAAAAAAAAAAAAAAAAAAAFASgDAAAADw==' from squid (length: 59). squid_kerb_auth: parseNegTokenInit failed with rc=101 squid_kerb_auth: received type 1 NTLM token. This last message is strange, as I didn't configure NTLM. It looks like the browser is using the wrong authentication method.

    Read the article

  • Linux per-process resource limits - a deep Red Hat Mystery

    - by BobBanana
    I have my own multithreaded C program which scales in speed smoothly with the number of CPU cores. I can run it with 1, 2, 3, etc. threads and get linear speedup, up to about 5.5x on a 6-core CPU on an Ubuntu Linux box. I had an opportunity to run the program on a very high-end Sunfire X4450 with 4 quad-core Xeon processors, running Red Hat Enterprise Linux. I was eagerly anticipating seeing how fast the 16 cores could run my program with 16 threads. But it runs at the same speed as just TWO threads! Much hair-pulling and debugging later, I see that my program really is creating all the threads, they really are running simultaneously, but the threads themselves are slower than they should be. 2 threads run about 1.7x faster than 1, but 3, 4, 8, 10, 16 threads all run at just 1.9x! I can see all the threads are running (not stalled or sleeping), they're just slow. To check that the HARDWARE wasn't at fault, I ran SIXTEEN copies of my program independently, simultaneously. They all ran at full speed. There really are 16 cores, they really do run at full speed, and there really is enough RAM (in fact this machine has 64GB, and I only use 1GB per process). So, my question is whether there's some OPERATING SYSTEM explanation, perhaps some per-process resource limit which automatically scales back thread scheduling to keep one process from hogging the machine. Clues are: My program does not access the disk or network; it's CPU limited. Its speed scales linearly on a single-CPU Ubuntu Linux box with a hexa-core i7 for 1-6 threads; 6 threads is effectively a 6x speedup. My program never runs faster than a 2x speedup on this 16-core Sunfire Xeon box, for any number of threads from 2-16. Running 16 single-threaded copies of my program works perfectly, all 16 running at once at full speed. top shows 1600% of CPU allocated. /proc/cpuinfo shows all 16 cores running at the full 2.9GHz speed (not the low-frequency idle speed of 1.6GHz). There's 48GB of RAM free, and it is not swapping. What's happening? Is there some per-process CPU limit policy? How could I measure it if so? What else could explain this behavior? Thanks for your ideas to solve this, the Great Xeon Slowdown Mystery of 2010!
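
    A quick way to rule per-process caps in or out, as a minimal diagnostic sketch (Python standard library; sched_getaffinity is Linux-only): print the CPU affinity mask the process inherits and its classic resource limits. If the affinity mask on the Xeon box contains only a couple of CPUs, the scheduler really is confining the process no matter what the hardware offers; a cpuset restriction would show up in the same mask even though ulimit never mentions it.

        # Diagnostic sketch: how many CPUs is this process actually allowed to
        # use, and do any of the classic rlimits look suspicious?
        import os
        import resource

        allowed = os.sched_getaffinity(0)   # CPU indices this process may run on
        print(f"CPUs allowed by affinity mask: {len(allowed)} -> {sorted(allowed)}")
        print(f"CPUs present on the machine:   {os.cpu_count()}")

        for name in ("RLIMIT_CPU", "RLIMIT_NPROC", "RLIMIT_AS", "RLIMIT_DATA"):
            soft, hard = resource.getrlimit(getattr(resource, name))
            print(f"{name:12s} soft={soft} hard={hard}")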

    Read the article

  • User Control's Page Init Event Not Firing After Using Another User Control

    - by Murat
    I have a user control to which I add controls dynamically in its Page Init event. On every postback of its parent page, this user control's Page Init fires fine. Then I added a different user control to the page. In the second user control, the user selects photos and uploads them with the AsyncFileUpload (AJAX) control, and I save everything with a button at the end of the page. The parent page's Load event fires on every postback. If I don't select any photo in the second user control, the first user control's Page Init event fires fine when I click the Save button. But if I select a photo, the first user control's Page Init does not fire. What is the problem?

    Read the article

  • What is the Best Free Linux Gateway

    - by rockinthesixstring
    I'm looking at moving away from using my DIR-825 as a gateway and moving to a Linux box to do it all for me. I've found IPCop, but I'm looking for something with a little more power. My main goal is basically to be able to point different external domain names to different internal servers: backup.example.com - 192.168.0.5, home.example.com - 192.168.0.1. I host my DNS on my own dedicated server (Windows), so I don't know much about doing the gateway thing in my home (my hosting provider does it all for me). Do any of you know of any free Linux distros that can accomplish what I'm looking for?

    Read the article

  • How to connect a USB GDI printer to Linux over a D-Link print server?

    - by jpe
    The setup is the following:

    +------------+       +-----------------+         +---------+
    | HP LJ P1005|--USB--| D-Link DPR-1020 |---LAN---| PC Linux|
    +------------+       +-----------------+    +    +---------+
                                                 |    +------------+
                                                 +----| PC Windows |
                                                      +------------+

    The HP LJ P1005 is one of those GDI printers that requires the printer driver to do most of the work for it and is therefore a bit "special". The D-Link DPR-1020 is a print server with an Ethernet and a USB port that actually supports printing to challenged (read: GDI) printers using a utility called PS-Link. What the utility does is basically mirror a USB port over the network to the print server, so that the printer driver and the printer are both happy to talk to each other. The PCs are notebooks that come and go, i.e. they are not there all the time. Is there an equivalent of the D-Link PS-Link utility for Linux that could mirror a USB port over the network for a Linux host? And can such a solution be used with the D-Link DPR-1020? If not, then I basically wasted the money buying the print server, because the goal was to share a small printer among a couple of users with diverse operating systems in an office. The print server specs say that it supports Linux and the LJ P1005, but the catch-22 appears to be the mechanism used for GDI printers... It should be noted that it is possible to print from Linux to the LJ P1005 directly over USB. So far, sharing has involved reconnecting the USB cable to the appropriate computer to print. Now one of the desks is separated, so the cable no longer works. Searching the net did not yield anything useful. Please do not suggest solutions involving a Windows machine (virtual or not); my question is whether a solution involving only a Linux machine exists.

    Read the article

  • Segmentation in Linux: Segmentation & Paging are redundant?

    - by claws
    Hello, I'm reading "Understanding the Linux Kernel". This is the snippet that explains how Linux uses segmentation, which I didn't understand: Segmentation has been included in 80x86 microprocessors to encourage programmers to split their applications into logically related entities, such as subroutines or global and local data areas. However, Linux uses segmentation in a very limited way. In fact, segmentation and paging are somewhat redundant, because both can be used to separate the physical address spaces of processes: segmentation can assign a different linear address space to each process, while paging can map the same linear address space into different physical address spaces. Linux prefers paging to segmentation for the following reasons: Memory management is simpler when all processes use the same segment register values, that is, when they share the same set of linear addresses. One of the design objectives of Linux is portability to a wide range of architectures; RISC architectures in particular have limited support for segmentation. All Linux processes running in User Mode use the same pair of segments to address instructions and data. These segments are called the user code segment and user data segment, respectively. Similarly, all Linux processes running in Kernel Mode use the same pair of segments to address instructions and data: they are called the kernel code segment and kernel data segment, respectively. Table 2-3 shows the values of the Segment Descriptor fields for these four crucial segments. I'm unable to understand the first and last paragraphs.

    Read the article

  • File creation time on Windows vs Linux

    - by Sergei
    We have the following setup:

    - mountserver - Debian Linux
    - fileserver1 - Windows 2008 R2 storage server
    - fileserver2 - Celerra NS20 exporting a CIFS share
    - workstation - Windows 7 with a mapped drive to the share on fileserver2

    What we are doing: we mounted the share from fileserver1 on mountserver (e.g. /shared/fileserver1) and the share from fileserver2 on mountserver (e.g. /shared/fileserver2), then ran rsync on mountserver to sync data from fileserver1 to fileserver2, using atime as the parameter to sync only data not older than X. After a while we tried to delete data older than Y on /shared/fileserver2. The Linux stat command on mountserver returns the following when querying a file on /shared/fileserver2 (output not shown). At the same time, when I open the properties of the same file using the mapped drive connected to fileserver2, I see the following (not shown). As you can see, the Created date of 12 August shown in Windows Explorer is nowhere to be seen in the stat output. Am I missing something here?
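
    A short sketch of why the two views differ (not specific to this setup): the POSIX stat interface used by the Linux stat command and by rsync only carries access, modification and inode-change times, so the NTFS/CIFS creation ("birth") time has no field to land in; only much newer kernels expose it through statx. Running something like this on a file shows everything classic stat can report:

        # The three timestamps POSIX stat exposes. There is no creation/birth
        # time field here on Linux, which is why Explorer's Created date never
        # appears in stat output on the mountserver (and why rsync cannot copy it).
        import os
        import sys
        import time

        path = sys.argv[1] if len(sys.argv) > 1 else "."
        st = os.stat(path)

        for label, value in (("atime (last access)", st.st_atime),
                             ("mtime (last modification)", st.st_mtime),
                             ("ctime (inode change, NOT creation)", st.st_ctime)):
            print(f"{label:35s} {time.ctime(value)}")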

    Read the article

  • iPhone memory gets freed in debug mode but not in release mode

    - by gdr
    I have been testing my iPhone debug build on both my device and the simulator with Activity Monitor, Leaks, and Object Allocations. The code is pretty well optimized, so I decided to test the release build. I went into the project menu, set the target build configuration to Release, added the necessary header paths my app uses to the header search paths, and ran the release build on the device with the above-mentioned instruments. What I noticed is that memory that was freed when I used the debug build does not get freed when using the release version. There is one place in my app where I remove a scroll view with some images, which frees up a significant amount of memory with the debug build, but no memory is freed up at that point with the release version. Does anyone have any ideas about where I should start looking? Did I set up my release build wrong?

    Read the article

  • Advantages of Using Linux as primary developer desktop

    - by Nick N
    I want to get some input on the advantages of developers using Linux as their primary development desktop on a daily basis, as opposed to Windows. This is particularly relevant when your dev, QA, and production environments are Linux. The analogy I keep coming back to is this: if I build my demo car as a Ford Escort but my project car is a Ford Mustang, it doesn't make sense at all. I'm currently in an IT department that allows dual-booting Windows and Linux; some run Linux, while the vast majority use Windows. Here are several advantages I've come up with since using Linux as my primary desktop:

    - The same exact operating system as dev, QA, and production
    - The same scripts (*.sh) instead of maintaining both *.bat and *.sh (somewhat mitigated by Cygwin, but still a bit different)
    - The team learns simple commands such as cd, ls, cat, top
    - The team learns advanced commands like pkill, pgrep, chmod, su, sudo, ssh, scp
    - Full access to the same kinds of installs as the target environments, e.g. RPM and DEB packages

    The list could go on and on, but I want to get some feedback on anything I may have missed, or even any disadvantages (of course there are some). To me it makes sense to migrate an entire team over to Linux, running Windows XP VMs in VirtualBox to test functional items against what 95% of the world uses. There is a similar but slightly different thread going on here as well: link text

    Read the article

  • Is an ext3 Linux filesystem byte-order independent?

    - by Lothar
    I have a good old HP C3700 workstation with a PA-RISC CPU here that I would like to use as a Subversion server for a very large repository. I just worry about what happens if the workstation dies (everybody who knows this machine knows it runs like an Abrams tank, and that a failure is unlikely in the next decade). I'm using Debian Linux on this system. If the mainboard dies, can I just plug the SCSI drive into a normal Intel Linux PC and read the files from there? Which software RAID levels would be safe?
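
    On the byte-order part: the on-disk ext2/ext3 format is defined with little-endian fields, so a filesystem written by the big-endian PA-RISC box should read fine on a little-endian Intel PC; the kernel swaps bytes as needed. A small sketch to convince yourself (point it at a partition or image file you can read, e.g. as root; the device path is just an example):

        # Read the ext2/3/4 superblock magic straight off the disk. The
        # superblock starts at byte offset 1024; s_magic is a 16-bit
        # little-endian field at offset 56 within it and reads 0xEF53 on any
        # ext filesystem, regardless of the byte order of the machine that
        # created it.
        import struct
        import sys

        device = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda1"   # example path

        with open(device, "rb") as f:
            f.seek(1024 + 56)
            (magic,) = struct.unpack("<H", f.read(2))

        print(f"s_magic = {magic:#06x} "
              f"({'looks like ext2/3/4' if magic == 0xEF53 else 'not an ext filesystem?'})")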

    Read the article

  • C++/CLI Mixed Mode DLL Creation

    - by Adam Haile
    I've got a native C++ DLL that I would like to have a C++/CLI wrapper layer for. From what I understood, if you simply added a C++/CLI class to the project, VS would compile it as mixed mode, but I was apparently wrong, as VS doesn't seem to even touch the managed code. So, given a pre-existing native code base, what exactly, step by step, do you need to do to create a mixed-mode DLL, so that I can link into that code from any .NET language? (I need to do this because my native code uses C++ classes that I cannot P/Invoke into.)

    Read the article

  • Setting up a WGR614v7 behind a Linux box

    - by commodore fancypants
    Here's the setup: I have an openSUSE box with 2 NICs; one goes to my home network router, and the other has DHCP running and is attached to a wireless router. I'm trying to get this setup working before I switch to the Linux box as my home network router. My DHCP server will offer the wireless router (a WGR614v7) an address, but anything that connects to the wireless router ends up with an APIPA address. I have all the firewalls on the wireless network turned off, as well as the wireless router's own DHCP. The Linux box isn't offering addresses to anything past the wireless router. Is this a problem with the router or my DHCP setup? For testing purposes, I have both NICs set in the internal zone, and I've tried wireless and wired connections to the WGR614v7, both to no avail.

    Read the article

  • Linux servers seeing bad download performance behind Sonicwall firewall

    - by Joshua Penix
    I'm working with a pair of co-located CentOS Linux servers sitting behind a Sonicwall PRO 2040 Enhanced firewall running in transparent bridge mode. These servers are having a strange problem downloading files more than a few megabytes in size. For example, if I try to wget or FTP a copy of the Linux kernel from kernel.org, the first ~1-2MB will download at 600+K/s, and then throughput will drop off a cliff to 1K/s. I've reviewed all the firewall configuration settings for anything suspicious, but found nothing. More interestingly, I performed the same download with a Windows server sitting behind the same firewall, and it sailed right through at 600+K/s the whole way. Has anyone seen this? Where should I start looking to troubleshoot this problem?
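
    One way to narrow this down is to measure exactly where the collapse happens, as in this rough sketch (the URL is only a placeholder): if the rate always drops after the same number of bytes, that points at something stateful in the firewall, such as a per-flow inspection or shaping threshold, rather than at the remote servers.

        # Chunked download with per-chunk throughput, to see where the
        # transfer falls off a cliff.
        import sys
        import time
        from urllib.request import urlopen

        # Replace with any large file reachable from behind the firewall.
        url = sys.argv[1] if len(sys.argv) > 1 else "http://example.com/largefile.iso"
        chunk_size = 256 * 1024
        total = 0

        with urlopen(url) as resp:
            while True:
                start = time.time()
                data = resp.read(chunk_size)
                if not data:
                    break
                total += len(data)
                rate = len(data) / 1024 / max(time.time() - start, 1e-6)
                print(f"{total / 1024 / 1024:8.1f} MiB total, {rate:10.1f} KiB/s this chunk")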

    Read the article

  • Linux Security/Sysadmin Courses in London?

    - by mister k
    Hi, My employer has offered to send me on a couple of training courses and I'm just looking for some recommendations. I'm mainly looking to improve my security and general sysadmin skills. I would like to do something focused on UNIX as I mainly work with Linux boxes (but also a couple of FreeBSD boxes). I don't want to do a study-from-home course, so I would need to find somewhere based in London. It would be great to hear from anyone who has some experience with this kind of course. The courses I've found so far are: www.learningtree.co.uk/courses/uk433.htm www.city.ac.uk/cae/cfa/computing/systems_it/linux.html www.city.ac.uk/cae/cfa/computing/systems_it/unix_tools_ss.html I'm not sure the City University courses are advanced enough as I already have experience... Thanks!

    Read the article

  • Create NTFS symbolic links from within Linux

    - by rymo
    Is there a Linux utility that can create NTFS symbolic links? That is, a link on an NTFS partition that points to another NTFS folder - one that will work within Windows 7, specifically. I wish to relocate a folder that is normally in use while Windows is running. This machine can already dual-boot into Ubuntu, so I'd like to leverage that. EDIT: To keep this from potentially turning into "which Windows Live CD is best", I will limit this question to "Is it possible with Linux, yes or no?"

    Read the article

  • network user isolation

    - by seaquest
    My question is about a network with a Linux iptables router as the gateway: how can I prevent traffic between the users on that network? Think of it as a public network: IPs are distributed through the Linux gateway and users are authenticated through the gateway. We want to protect public users from other public users. The network is not wireless, so I cannot use wireless AP user isolation. Actually, I do have one simple method: subnet the network into /30s, give the lowest usable IP of each subnet to the gateway, and hand out the remaining /30 addresses to clients. But this is pretty costly in addresses for such an aim, so I want to ask for other methods. Thanks.
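
    The arithmetic behind the /30 method described above is easy to script, as a sketch (addresses and interface names are examples): each client gets its own /30 with the gateway holding the first usable address as a secondary address on the LAN interface, so client-to-client packets must be routed through the gateway, where a single rule that drops forwarded traffic entering and leaving the same LAN interface (for example, iptables -A FORWARD -i eth1 -o eth1 -j DROP) enforces the isolation.

        # Carve a /24 into per-client /30s: the gateway takes the first usable
        # address of each /30, the client gets the second. A DHCP config with
        # fixed host entries would mirror this table.
        import ipaddress

        lan = ipaddress.ip_network("192.168.100.0/24")    # example address plan

        for subnet in lan.subnets(new_prefix=30):
            gateway, client = list(subnet.hosts())        # the two usable addresses
            print(f"{subnet}  gateway={gateway}  client={client}")

    The cost mentioned in the post is real: each client burns a whole /30, i.e. four addresses, for one usable host.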

    Read the article

  • Mirroring a Linux server to an external USB hard drive

    - by DuPie
    My google-fu must be failing; I haven't been able to find a good solution for the following:

    - Numerous Linux servers on commodity hardware
    - Trying to do a recovery mirror copy to external hard drives
    - The external hard drives are smaller than the source drives, but larger than the data
    - The external drives are connected via USB 2.0 (slow)
    - The servers range from 20GB to 400GB of data
    - The servers are remote, so hands-on access is a pain
    - Boot files need to be copied too
    - The external drives are currently empty

    Basically, I'm looking for a way to use a ghosting solution from INSIDE a running Linux server to an external hard drive, without booting from a CD, etc. The rsync/cpio solutions I've looked at don't work well with GRUB, /dev, /proc, etc. I understand that since the system isn't offline it won't be a perfect "mirror" image as files change, but that's OK. Are there any free/commercial products that would work?
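
    For the rsync route specifically, a minimal sketch of copying the live system from inside itself, assuming the external drive is already formatted and mounted at /mnt/backup (a placeholder path): the excludes below are what usually makes this workable despite /proc and friends. The result is a file-level mirror rather than a bootable ghost image; the target would still need a bootloader installed on it before it could boot on its own.

        # Mirror the live root filesystem onto a mounted external drive,
        # skipping pseudo-filesystems and volatile paths. Run as root.
        import subprocess

        dest = "/mnt/backup"     # placeholder: wherever the USB drive is mounted
        excludes = ["/proc/*", "/sys/*", "/dev/*", "/run/*", "/tmp/*",
                    "/mnt/*", "/media/*", "/lost+found"]

        cmd = ["rsync", "-aHAX", "--delete", "--numeric-ids"]
        cmd += [f"--exclude={e}" for e in excludes]
        cmd += ["/", dest]

        subprocess.run(cmd, check=True)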

    Read the article

  • Is AV software needed for Windows XP Mode on Windows 7?

    - by Wesley
    Hi all, I just got Windows XP Mode on my Windows 7 Professional 64-bit machine. I notice that the Windows XP Security Center notes that I don't have any virus protection. However, I'm running AVG Free on Windows 7. Do I need to have some sort of anti-virus program installed on Windows XP Mode? Thanks in advance.

    Read the article

  • Mounting a vSphere 4.0 file system in Ubuntu Linux

    - by sravan
    Hi all, I am using vSphere 4.0 for my project work. I needed 4-5 servers for my project, which is database-based, and virtualisation seemed like a good way to keep the 5 servers running at the same time. It was running well until a few days ago. Yesterday it suddenly crashed, and I had no idea of the reason. Today it did not even boot up. Now I need to take a backup of the data from that system. To do so, I took the hard drive out of the machine and tried to mount it on a local Linux machine, but I was not successful: the disk was not even recognised by the Linux machine. Can someone please tell me how to mount it and get the required data? Thank you all.

    Read the article

  • Java Swing app hangs when run in normal mode but runs fine in debug mode

    - by snocorp
    I am writing a basic Java application with a Swing front end. Basically it loads some data from a Derby database via Apache Cayenne and then displays it in a JTable. I'm doing my development in Eclipse and, though I don't think it matters, I'm using Maven for dependencies. This works fine when I run it using Debug, but it seems to hang the display thread when I use the Run button. I've done a thread dump and, while I'm not 100% certain, everything looks good. I used Java VisualVM and the threads look fine there as well. Strangely, it occasionally works, but the hang is pretty consistent and easy to reproduce. If anyone has any ideas, I'm all out of them.

    Read the article
