Search Results

Search found 42646 results on 1706 pages for 'vbox question'.


  • Limiting access in Silverlight/PivotViewer

    - by sparaflAsh
    I'm going to deploy a PivotViewer application. As some of you might know, this Silverlight application loads a .cxml index file for a group of images. My requirement is to make the .cxml file and the image files inaccessible to the user. Normally, when I don't have this requirement, I code like this in C#, with the file hosted in the document root: _cxml = new CxmlCollectionSource(new Uri("http://www.myurl.it/Collection.cxml", UriKind.Absolute)); This means that my .cxml, and therefore the images, are available over HTTP to everyone who knows the URI. I'm a newbie at server configuration, so any help/hint would be deeply appreciated. Someone suggested taking the files out of the root, but it seems I can't fetch them from Silverlight if they aren't reachable by URL. At least I didn't manage to understand how. Someone else suggested playing with the web.config file to hide the URLs, but I don't really know where to start. My question is: what's the best practice for hiding my stuff? Obviously I can edit the question if you need more details.
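
    For illustration, one common approach is to require authentication on the collection file rather than trying to hide it. A minimal web.config sketch, assuming the request flows through ASP.NET (IIS7 integrated pipeline); the path is the one from the question:

        <!-- deny anonymous users direct access to the collection -->
        <location path="Collection.cxml">
          <system.web>
            <authorization>
              <deny users="?" />
            </authorization>
          </system.web>
        </location>

    The Silverlight client then has to present the user's cookie/credentials when it requests the file; note that in classic pipeline mode static files bypass ASP.NET and this rule would not apply.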

    Read the article

  • Load Sharing Regarding Large Websites

    - by JHarley1
    Hello, I have a question regarding load sharing for large websites. My understanding: if you have a website that gets millions of hits a day, you need an architecture that can support that sort of pressure. You can do one of two things: invest in a single large server with huge amounts of processing power, memory and storage (such as Microsoft's TerraServer), or spread the load of your website across a number of machines. Let me tackle the second approach: you have a collection of machines, all running web server software and all with access to identical copies of the website's pages. You can spread the load across these machines either with a cyclic (round-robin) pattern in DNS or with a load-balancing switch. The advantages of this approach are redundancy - servers can fail and the others "pick up the slack" - and incremental growth - the ability to easily add new machines to the set-up. My questions: Is there a virtual approach to load balancing now? If the website runs from a database, is there still only a single copy of the database? If a user had a session running on one server (e.g. they had gone to www.example.org and been assigned to Server 2, where they had created a session), would they still have their session if they refreshed the website and were allocated to Server 3? What are the other disadvantages associated with load balancing? Many thanks, J
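
    To illustrate the "cyclic pattern in DNS" mentioned above: round-robin DNS is nothing more than several A records for the same name, which most resolvers rotate through. A sketch with placeholder addresses:

        www   IN  A   192.0.2.10
        www   IN  A   192.0.2.11
        www   IN  A   192.0.2.12

    Plain round-robin DNS gives no health checking and no session affinity, which is why the session question above is usually answered with sticky sessions on the load balancer or a shared session store.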

    Read the article

  • Setup staging with multiple SVN

    - by Kapil Sharma
    We are a startup setting up new environments for a product to be released soon. The planned server structure, with the planned release flow, is as shown in the image below. Ideally it has a local (staging) server (shown in green) in the local office, without a public IP address, and a production server (red) at Amazon EC2. Both the local and the production server have their own SVN repository. Management here wants to update the production server from the production SVN without giving developers (including freelancers/contract employees) access to it. So, for developers, there is a local SVN on the local server. Another purpose of the local SVN is to keep a copy of the code on a server under our direct control. There are some technical concerns, like how code on the local server will be updated from the local SVN and committed to the production SVN, but the bigger question is: is this structure correct? The major requirement remains: don't give developers access to the production SVN. What are the other possible options to achieve that? Another minor question, if it fits here: if the above structure is correct, is it possible for an SVN checkout to get updated from one repository (local SVN) but commit to the other (production SVN)? If yes, how? edit An answer has been accepted, but for the bounty I'm still looking for an answer to: Is this structure correct? What are its pros/cons? A technical solution is already provided by the accepted answer.
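
    One way to keep the two repositories in step without giving developers production access is a one-way mirror with svnsync, which ships with Subversion. A sketch - the URLs are placeholders, and the destination repository must start out empty and allow revision-property changes via its pre-revprop-change hook:

        # run once to pair the repositories (destination first, source second):
        svnsync initialize https://prod.example.com/svn/repo file:///var/svn/local-repo
        # run after each approved release to push new revisions to production:
        svnsync synchronize https://prod.example.com/svn/repo

    Only the account running svnsync needs write access to the production repository.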

    Read the article

  • How should I monitor memory usage/performance in SunOS/Solaris?

    - by exhuma
    Last week we decided to add some SunOS machines (uname -a = SunOS bbs-sam-belair 5.10 Generic_127128-11 i86pc i386 i86pc) to our running munin instance. First off, the machines are pre-configured appliances, so I want to avoid touching the system too much without supervision from the service provider. But adding them to munin was fairly easy by writing a small socket service (if anyone is interested, I put it up on github: https://github.com/munin-monitoring/contrib/tree/master/tools/pypmmn). Yesterday I implemented/adapted the required plugins for our machines. And here the questions start: First, I have not found a way to determine detailed memory usage values. I get the total memory by running prtconf | grep Memory, and the free memory using vmstat. Fiddling together a munin plugin gives me a graph that is pretty much uninformative. Compare this to the default plugin for Linux nodes, which has a lot more detail; most importantly, it shows me how much memory is actually used by applications. So, first question: is it possible to get detailed memory information on SunOS with the default system tools (i.e. not using top)? On to the next puzzle: looking at the graphs, I noticed activity in the "paging in/out" graphs even though the memory graph still showed unused memory. Upon further investigation, I found out that df reports /tmp as mounted on swap. Drilling around on the web, I understood that df will display swap, but it is in fact mounted as a tmpfs. Now I don't know if this explains the swap activity. The default munin plugin for Solaris uses kstat -p -c misc -m cpu_stat to get these values. I already find it strange that it uses the cpu_stat module. So maybe I simply misinterpret the "paging" graphs? Second question: do the paging graphs indicate that parts of memory are paged to disk, or is the activity caused by file operations in /tmp?
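
    For the first question, Solaris 10 does ship tools that break memory down further than vmstat's free column; a few worth trying (the mdb one needs root):

        # kernel / anon (application) / page cache / free breakdown:
        echo ::memstat | mdb -k
        # paging activity split by type (executable, anonymous, filesystem):
        vmstat -p 5
        # swap reservation, including what tmpfs (/tmp) is consuming:
        swap -s

    vmstat -p is also the quickest way to approach the second question: anonymous page-outs (the api/apo columns) mean real memory pressure, while filesystem pages cycling in and out is usually harmless.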

    Read the article

  • How to set up a Mac server to use two gateways

    - by Brady
    I recently asked this question: How to set Mac server to use different Gateway for internet bound traffic. The answer given works, but has presented me with another issue that I didn't make clear in that question. Here is my network layout as it stands: at the moment, outside staff members use some services on the existing internet 1 link. Those services are hosted by the Mac server. If I change the gateway of the Mac server to the second modem, those outside staff lose visibility of those services. Now I don't know how to go about solving this issue. I want the second link to be used when the Mac server rsyncs data offsite, but everything else to use link one. How do I do this? Thanks, Scott EDIT: This has been resolved by setting the default gateway on the Mac server to 192.168.1.254, thus leaving everything on the network as it was before, but adding a route on the Mac server so that traffic to the rsync server goes through the second gateway: sudo route add -net {server IP's}/{Netmask} 192.168.1.1 I've awarded the answer to gravyface for pointing me to a post on how to make this route persistent on a Mac.
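
    For reference, one way to make such a route survive reboots on Mac OS X is a small launchd daemon, saved as e.g. /Library/LaunchDaemons/com.example.rsyncroute.plist, that re-adds the route at boot. A sketch only - the label and network below are placeholders; keep the real values from the EDIT above:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
          <key>Label</key><string>com.example.rsyncroute</string>
          <key>ProgramArguments</key>
          <array>
            <string>/sbin/route</string><string>add</string><string>-net</string>
            <string>203.0.113.0/24</string><string>192.168.1.1</string>
          </array>
          <key>RunAtLoad</key><true/>
        </dict>
        </plist>

    Load it once with sudo launchctl load /Library/LaunchDaemons/com.example.rsyncroute.plist; launchd then re-runs it on every boot.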

    Read the article

  • Why is 10-year-old software still so slow, even today?

    - by Cawas
    I just noticed this question because of a game (which happens to be Diablo 2), but the real matter is: why can't my brand-new MacBook Pro, made in 2009 with the latest technology (though it's the cheapest one), rival my computer that ran this game much faster back in 2000? Really, it was much faster on my AMD K6 450 back in those days, and I could even run two clients at the same time with no slowdown. I've always had the feeling this machine was slow, but this is a very odd way to confirm it. Granted, the machine is smaller, runs on wifi, and "boots" way faster thanks to sleep mode. But other than that, what have we evolved, after all?! I'm pretty sure this shouldn't be the graphics card's fault. Sure, if I buy the latest technology it will run fast, and probably most people here can confirm this and won't even understand my question. But the thing is, all the hardware is supposedly much faster and better than the stuff from 10 years ago. The software and operating systems became more complex, but also more refined. Now I'm running a piece of software that is actually 10 years old and it's not getting any better results! Why?

    Read the article

  • SSL stops working on IIS7 after a reboot

    - by Mark Seemann
    I have a Windows 2008 Server with IIS7. Every time the server reboots, SSL stops working. Normal HTTP requests work fine, but any request to an HTTPS address gives the typical error message in the browser: "Cannot find server or DNS". I can temporarily fix it by opening IIS Manager and bringing up the Bindings… window for the website in question. I select "https", click "Edit", then click "Ok" without making any changes to the settings. After doing this, browsing to https:// works again until the next reboot. This issue looks a lot like the one described here, but according to the Certificates MMC snap-in, the certificate in question does have a private key. I'm also pretty sure that I never installed the certificate in the personal store but imported it straight into the machine store, though it's been a while... There's not a lot in the event log apart from event ID 36870, also described in the post I linked to. Can anyone help me troubleshoot this issue so that SSL works even after a server reboot?
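
    A diagnostic sketch that may help narrow this down from an elevated prompt (the thumbprint below is a placeholder): compare the HTTP.SYS binding before and after a reboot, and check whether the certificate's private-key association survives.

        rem show what HTTP.SYS thinks is bound to port 443
        netsh http show sslcert
        rem list certificates in the machine store, with key information
        certutil -store my
        rem re-link a certificate to its private key if the association was lost
        certutil -repairstore my "0123456789abcdef0123456789abcdef01234567"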

    Read the article

  • Why can't I specify the executable that opens files with a given extension on Windows?

    - by Glen S. Dalton
    I am on Windows Server 2003, but I guess it is the same on Windows XP. This is a SuperUser question, because it is definitely desktop, so do not move or close it. Question: I copied some portable applications (the kind people usually create for USB sticks) to locations like c:\bin\app1\app1.exe. app1.exe can open files of type *.ap1. When I right-click file.ap1 and choose "Open with...", the "Open with" dialog appears. But it is not working how I expect in this situation. I can choose c:\bin\app1\app1.exe with the "Browse" button, but: app1.exe does not appear in the dialog's program list after I click OK in the browse dialog, the way I am used to; and app1.exe does not open the file when I click OK in the "Open with" dialog - the application that was assigned until then still opens it. What could be the reason? Edit: Additional information: my account is a member of the Administrators group. I just changed the permissions of the folder c:\bin\app1\ and made sure that the group "Administrators" has all rights. I also applied this manually to all subfolders and files.
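
    As a workaround, the association can also be set from the command line, bypassing the "Open with" dialog entirely. A sketch - the ProgID name App1File is made up; .ap1 and the path are from the question. assoc and ftype write to HKEY_CLASSES_ROOT, so run them from an administrator prompt:

        assoc .ap1=App1File
        ftype App1File="c:\bin\app1\app1.exe" "%1"

    Per-user "Open with" choices live under HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts, which is also where a stale per-user override could be deleted.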

    Read the article

  • How to configure multiple virtual hosts for multiple users on Linux/Apache2.2

    - by authentictech
    I want to set up a virtual hosting server on Linux/Apache 2.2 that allows multiple users to set up multiple website domains, as would be appropriate for commercial shared hosting. I have seen examples (from my then perspective as a shared-hosting customer) that let users store their web files in their home directory, with directories corresponding to the virtual host domains, e.g.: /home/user1/www/example1.com /home/user2/www/example2.com instead of using /var/www. Questions: How would you configure this in your Apache configuration files? (Don't worry about DNS.) Is this the best way to manage multiple virtual hosts? Are there others? What safety or security issues do you think I should be aware of in doing this? Many thanks, folks. Edit: If you want to answer only question 1, please feel free, as that is the most urgent for me at this moment, and I would consider that an answer to the question. I have done it myself since posting, but I am not confident that it's the best solution, and I would like to know how an experienced sysadmin would do it. Thanks.
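
    For question 1, a minimal name-based vhost sketch for Apache 2.2, using the paths from the question (the domains are the question's examples):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName  example1.com
            ServerAlias www.example1.com
            DocumentRoot /home/user1/www/example1.com
            <Directory /home/user1/www/example1.com>
                Options -Indexes +FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Repeat one VirtualHost block per domain (on Debian/Ubuntu, one file each under /etc/apache2/sites-available, enabled with a2ensite). On question 3: make sure Apache can traverse the home directories while users cannot read each other's files; suexec or mpm-itk are the usual answers for running scripts as the owning user.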

    Read the article

  • SSH X11 forwarding does not work. Why?

    - by Ole Tange
    This is a debugging question. When you ask for clarification, please make sure it is not already covered below. I have 4 machines: Z, A, N, and M. To get to A you have to log into Z first. To get to M you have to log into N first. The following works: ssh -X Z xclock ssh -X Z ssh -X Z xclock ssh -X Z ssh -X A xclock ssh -X N xclock ssh -X N ssh -X N xclock But this does not: ssh -X N ssh -X M xclock Error: Can't open display: The $DISPLAY is clearly not set when logging in to M. The question is why? Z and A share the same NFS home directory. N and M share the same NFS home directory. N's sshd runs on a non-standard port. $ grep X11 <(ssh Z cat /etc/ssh/ssh_config) ForwardX11 yes # ForwardX11Trusted yes $ grep X11 <(ssh N cat /etc/ssh/ssh_config) ForwardX11 yes # ForwardX11Trusted yes N:/etc/ssh/ssh_config == Z:/etc/ssh/ssh_config and M:/etc/ssh/ssh_config == A:/etc/ssh/ssh_config /etc/ssh/sshd_config is the same for all 4 machines (apart from Port and login permissions for certain groups). If I forward M's ssh port to my local machine it still does not work: terminal1$ ssh -L 8888:M:22 N terminal2$ ssh -X -p 8888 localhost xclock Error: Can't open display: A:.Xauthority contains A, but M:.Xauthority does not contain M. xauth is installed in /usr/bin/xauth on both A and M. xauth is being run when logging in to A but not when logging in to M. ssh -vvv does not complain about X11 or xauth when logging in to A and M. Both say: debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null debug1: Requesting X11 forwarding with authentication spoofing. debug2: channel 0: request x11-req confirm 0 debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. I have a feeling the problem may be related to M missing in M:.Xauthority (caused by xauth not being run) or that $DISPLAY is somehow being disabled by a login script, but I cannot figure out what is wrong.
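
    Given the symptoms (xauth runs on A but not on M), one way to narrow it down on M itself, assuming root access there:

        # does M's sshd allow X11 forwarding, and can it find xauth?
        grep -iE 'x11forwarding|xauthlocation|x11displayoffset' /etc/ssh/sshd_config
        which xauth
        # watch the X11 negotiation from N's side:
        ssh -vvv -X M 'echo $DISPLAY' 2>&1 | grep -iE 'x11|xauth|display'

    If sshd_config looks right, also check that it is the file the running sshd was actually started with (non-standard-port setups sometimes run a second sshd with its own config), and look for shell startup files on M that unset or overwrite DISPLAY.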

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course where we discussed usage scenarios for multiple filegroups and files over local RAID and local disks, but we didn't touch on SAN scenarios, so my question is as follows: I currently have a 250 GB database running on SQL Server 2005, where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same disk-contention issue that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So, in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads to the question: how many files should I have - one per core? Should I put tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
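
    For reference, adding a filegroup with an extra file and moving a hot index onto it looks like this in T-SQL (the database, paths and object names are placeholders):

        ALTER DATABASE MyDb ADD FILEGROUP HotData;
        ALTER DATABASE MyDb ADD FILE
            (NAME = N'HotData1', FILENAME = N'E:\Data\HotData1.ndf',
             SIZE = 4GB, FILEGROWTH = 512MB)
        TO FILEGROUP HotData;
        -- rebuild an existing busy index onto the new filegroup:
        CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId)
            WITH (DROP_EXISTING = ON) ON HotData;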

    Read the article

  • Fix MBR from installed Windows Vista

    - by Danilo
    Hi guys, I have a rather strange problem. I had a system with Vista and Ubuntu installed. We always use Vista; Ubuntu was something we really did not need. BUT: GRUB (I guess grub2) was used to boot. Now, while in Vista, I deleted the Ubuntu partition, and with it GRUB's files. Now the system does not boot anymore. I tried to reinstall Ubuntu, but I had some problems with the CD. At the moment, when the system boots, I get into the GRUB shell. From there I am able to boot Windows Vista with commands like these: rootnoverify (hd0,msdos3); chainloader +1; boot. Now the question is: if I am able to boot into Windows Vista with this trick, is it possible to fix the MBR from inside the installed Windows Vista with some command/tool of Vista itself? I should probably mention that we are not interested in dual boot at the moment; we only want Vista to start. I can sum up the question like this: is there a way to fix the MBR from the installed copy of Windows Vista, considering that GRUB is currently installed? I hope I was clear enough. Thanks for your help.
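
    Yes - Vista can rewrite the MBR from an elevated command prompt using bootsect.exe, which ships on the install DVD (the drive letter below is a placeholder):

        rem from the Vista DVD (here D:), write a Vista MBR and boot sector:
        D:\boot\bootsect.exe /nt60 SYS /mbr
        rem alternatively, boot the DVD, choose "Repair your computer", then:
        bootrec /fixmbr
        bootrec /fixboot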

    Read the article

  • Massive SQL issue shutting down our site.

    - by Pselus
    Our website started timing out like crazy today. All of our clients are finding it unusable. The only error we can trace down as a potential problem is this: "SQLAllocHandle on SQL_HANDLE_DBC failed" (error category: Microsoft OLE DB Provider for ODBC Drivers). I have no idea what it means or how to go about fixing it. Has anyone encountered this error before? Currently you can log in to our site, but once you go to do anything else, you find yourself logged out or nothing happens. We have a lot of Ajax going on, so the "nothing happens" probably comes from the Ajax pages not loading properly due to the logouts, so nothing displays to the user. Like I said, I'm at a loss. Does anyone have any advice on this error? EDIT: I realize that this isn't necessarily a programming question, but we are a small startup company that just yesterday started talking about how we need to get a backup server running. Apparently we talked about it too late. We don't have a DBA, just two mid-level programmers trying their hardest to keep our clients happy. So please, if you have any assistance, give it, but please don't close my question right now. EDIT 2: It turns out we had something running on our server called "ServerMask", which makes our IIS server look like Apache to the outside world. Shutting it down fixed our issue. Still no idea why it was messing things up, but it was apparently the problem. Thanks to everyone who tried to help.

    Read the article

  • Allow opening a new tab with Ctrl+T on all websites in Firefox

    - by Martin J.H.
    In Firefox, certain websites and plugins (e.g. the Adobe PDF plugin) appear to "capture" the Control key, so that when I try to open a new tab using Ctrl+T, nothing happens - or worse, something unexpected happens. Examples: on the Codecademy site, while editing code, Ctrl+T either does nothing or (when Flash is disabled) switches the position of the two characters next to the cursor. When viewing PDFs with the Adobe PDF plugin, Ctrl+T does nothing. Is there a way to disable this "feature"? I would like Ctrl+T to always "talk" to Firefox! Edit: After searching SuperUser further, this question is very similar to "How to prevent keystroke grabbing/hijacking by websites in Firefox?" and "How do I prevent pages I visit from overriding selected Firefox shortcut keys?". The answers to those questions are interesting and relevant, but do not give a method for disabling combinations such as Ctrl+T. Maybe a modified Greasemonkey script is the easiest solution. Edit 2 - Attempt at a solution: The following user script (use Greasemonkey to install it) successfully captures Ctrl+T on some sites (the Google Search site, for instance - the "Gotcha" popup appears), but not on the Codecademy site. I found another question pertaining to this subject: "How to forbid keyboard shortcut stealing by websites in Firefox". It was raised in 2010, and the consensus was: it can't be done.

        // ==UserScript==
        // @name          Disable Ctrl T interceptions
        // @description   Stop websites from hijacking keyboard shortcuts
        // @run-at        document-start
        // @include       *
        // @grant         none
        // ==/UserScript==

        // Keycode for 't'. Add more to disable other Ctrl+X interceptions.
        var keycodes = [84];

        document.addEventListener('keydown', function(e) {
            // uncomment to find out the keycode for any given key
            // alert(e.keyCode);
            if (keycodes.indexOf(e.keyCode) != -1 && e.ctrlKey) {
                e.cancelBubble = true;
                e.stopImmediatePropagation();
                alert("Gotcha!");
            }
            return false;
        });
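
    A hedged guess at why the script loses on some sites: without the third argument, addEventListener registers in the bubbling phase, so a page handler registered in the capture phase (or closer to the event target) can still run first and swallow the event. Registering in the capture phase on window may win the race:

        // capture-phase variant: 'true' makes this run before the page's own handlers
        window.addEventListener('keydown', function(e) {
            if (e.ctrlKey && e.keyCode == 84) {
                e.stopImmediatePropagation();  // keep page handlers from seeing Ctrl+T
            }
        }, true);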

    Read the article

  • Shared files folder in Amazon Elastic Beanstalk environment

    - by por
    I'm working on a Drupal application which is planned to be hosted in an Amazon Elastic Beanstalk environment. Basically, Elastic Beanstalk enables the application to scale automatically by starting additional web server instances based on predefined rules. The shared database runs on an Amazon RDS instance, which all instances can access properly. The problem is the shared files folder (sites/default/files). We're using git as SCM, and with it we're able to deploy new versions by executing $ git aws.push. In the background, Elastic Beanstalk automatically deletes ($ rm -rf) the current codebase from all servers running in the environment and deploys the new version. The plan was to use S3 (s3fs) for shared files in the staging environment and NFS in the production environment. We've managed to set up the environment to the point where the shared files folder is properly mounted after a reboot. But... the problem is that, in this setup, deployment of new versions to running instances fails because $ rm -rf can't remove the mounted directory; as a result, the entire environment goes down and we need to restart it, which isn't really an elegant solution. Question #1: What would be the proper way to manage shared files in this kind of deployment? Are you running such an environment? How did you solve the problem? Looking at the Elastic Beanstalk Hostmanager code (Ruby), there seems to be a way to hook our functionality (unmount if mounted in pre-deploy, mount in post-deploy) into Hostmanager (/opt/hostmanager/srv/lib/elasticbeanstalk/hostmanager/applications/phpapplication.rb), but the scripts defined in the file (i.e. /tmp/php_post_deploy_app.sh) don't seem to be working. That might be because our Ruby skills are non-existent. Question #2: Did you manage to hook your functionality into Hostmanager in a portable way (i.e. without changing the core Hostmanager files)?
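
    One conventional workaround, sketched here under the assumption that the paths can be adapted: mount the shared store outside the deploy root and put only a symlink inside the codebase. rm -rf then removes the symlink, not the mount, so deployments stop colliding with it:

        # mount the shared store once, outside the web root (s3fs shown; NFS is analogous)
        s3fs my-bucket /mnt/shared -o allow_other
        # after each deploy, re-create the link inside the fresh codebase
        ln -sfn /mnt/shared/files /var/www/html/sites/default/files

    The web-root path above is a placeholder for wherever Beanstalk deploys the PHP application.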

    Read the article

  • Debian DNSSEC - how to secure a domain?

    - by Daniel Marschall
    I have a beginner question about DNSSEC. I have a lot of experience with TLS and cryptography and would like to try out this new technology. I have googled a lot, but haven't found information useful to me. I think one source of confusion is that "Debian DNSSEC setup howto" can mean either "how to USE DNSSEC for resolving" OR "how to secure your domain with DNSSEC". I am looking for the second. I am running a Debian Squeeze server with root privileges, with a domain name ending in ".de" (a TLD which is already signed in the root zone). The network interface on this server uses the gateway IP (DNS resolver?) of the datacentre the server runs in. My domain is hosted at freedns.afraid.org, where I can add DNS RRs for my domain. They are currently NOT capable of adding DNSSEC RRs, but I am bugging them to support this soon. ;-) My simple question is: how do I set up DNSSEC on Debian? Or rather: whom do I have to ask? As far as I understand, all I have to do is run dnssec-keygen on my Debian server and then add the key to my DNS provider as a DNSSEC RR. (And change it every 30 days?) I have looked at http://www.isc.org/files/DNSSEC_in_6_minutes.pdf but it looks like you have to be the owner of a ZONE, so I don't think this applies to me. Who needs to sign my domain - my DNS provider, the registry (DENIC), or can I do it myself? Any help is very appreciated!
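
    For reference, the signing itself is done by whoever serves the zone, using the BIND tools; conceptually it is only a few commands. A sketch - in this setup they would have to run at the DNS provider, and the resulting DS record must then be passed up to DENIC via the registrar:

        # generate a zone-signing key and a key-signing key:
        dnssec-keygen -a RSASHA256 -b 1024 -n ZONE example.de
        dnssec-keygen -a RSASHA256 -b 2048 -n ZONE -f KSK example.de
        # append the public keys to the zone file and sign it:
        cat Kexample.de.+008+*.key >> db.example.de
        dnssec-signzone -o example.de db.example.de

    The signatures expire (30 days by default), which is where the impression of "changing the key every 30 days" comes from: the zone has to be re-signed periodically, not necessarily re-keyed.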

    Read the article

  • Repository bugzilla package changed to bugzilla3 in Lenny; upgradable?

    - by Pukku
    This question was asked on debianhelp.org almost half a year ago but never got an answer. I wasn't the one who posted it, but today I faced exactly the same question. I'm not sure if copying it here as-is is considered inappropriate or something, but there isn't really anything I would even want to paraphrase... so let's just go. (I'm sure you will be happy to close it if this is not the way to go. :) Hello all! We are using a Bugzilla install on a Debian 4/Etch server and are starting to look at the upgrade to Debian 5/Lenny. I was hoping to upgrade the existing Bugzilla server and database from oldstable (v2.22) to the newer stable version in Lenny (v3) when we do the dist-upgrade. However, from testing in a virtual machine, it seems that the old package was called "bugzilla" whereas the Lenny package is called "bugzilla3", and I could not figure out a way to upgrade directly between the two. Is it possible to establish some kind of upgrade path quickly after the dist-upgrade to minimise downtime, using apt-get or aptitude? Going by past experience, I would not want to do a fresh install with the bugzilla3 package and attempt to inject the old database into it (previous attempts failed miserably!) :(
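
    A hedged sketch of the path that usually works with Bugzilla itself: upstream's checksetup.pl migrates old database schemas in place, so the packages don't need a direct upgrade path. The file locations below are guesses - verify them with dpkg -L:

        # back up first, before the dist-upgrade:
        mysqldump bugs > bugzilla-2.22-backup.sql
        # after the dist-upgrade:
        apt-get install bugzilla3
        # point bugzilla3's localconfig at the existing 'bugs' database,
        # then let Bugzilla upgrade the schema in place:
        dpkg -L bugzilla3 | grep checksetup
        /usr/share/bugzilla3/lib/checksetup.pl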

    Read the article

  • SQL 2K5 - Multiple databases vs. Multiple files

    - by Bob Palmer
    Hey all, quick question. Our current legacy system was built using multiple distinct databases (about ten of them). These are all part of the same discrete system, and a large number of stored procedures and pieces of functionality span multiple databases. There are also key relationships that span them (for example, a header table may be in database A with history, etc. in database B). When deploying multiple copies of our app to the same server, we therefore have to use multiple instances (because the database names are coded into so many sprocs). We're evaluating the idea of taking these ten databases (about 30 GB total, with individual sizes ranging from 100 MB to 10 GB) and merging them into a single database. Currently we spread our databases across multiple spindles for better IO. The question I have is whether there is any performance loss or benefit to having 10 different databases vs. 10 different database files. I.e. rather than having three databases (A, B, and C): Disk D: A.mdf (1 GB); Disk E: B.mdf (4 GB); Disk F: C.mdf (10 GB); Disk G: A_Log.ldf, B_Log.ldf, C_Log.ldf - have one database (X): Disk D: X1.mdf (5 GB); Disk E: X2.mdf (5 GB); Disk F: X3.mdf (5 GB); Disk G: X1_log.ldf, X2_log.ldf, X3_log.ldf. Thanks! -Bob
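
    For illustration, the merged layout from the question expressed as one database with files spread across the same spindles (names taken from the question; secondary data files conventionally use .ndf, since only the first file is the primary .mdf):

        CREATE DATABASE X
        ON PRIMARY
            (NAME = X1, FILENAME = 'D:\Data\X1.mdf', SIZE = 5GB),
            (NAME = X2, FILENAME = 'E:\Data\X2.ndf', SIZE = 5GB),
            (NAME = X3, FILENAME = 'F:\Data\X3.ndf', SIZE = 5GB)
        LOG ON
            (NAME = X_log, FILENAME = 'G:\Logs\X_log.ldf', SIZE = 2GB);

    Note that SQL Server fills log files sequentially, so multiple log files add capacity but no performance; one log file per database is the norm.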

    Read the article

  • Ubuntu server or Debian server (to run C++ apps developed on Ubuntu)

    - by skyeagle
    I have written a number of C++ server-side daemons for my website using my Ubuntu 9.10 dev machine. The C++ apps mentioned above are "GUI-less" daemons (and libraries used by the daemons). I am now about to host my website and need to decide whether to go with Debian server or Ubuntu server. In a nutshell, the situation is: I developed on the Ubuntu desktop because I preferred its friendlier GUI. I would like to deploy on Debian server because of the (perceived?) robustness of Debian server over Ubuntu server (I may be totally wrong here - and in fact, this is really what this question is all about). If Debian server is indeed more robust than Ubuntu server, then I have no choice but to go with Debian server - BUT will my Ubuntu-built C++ apps run on that server? (Or do I need to recompile them on the server? I'd HATE to have to do this, because I want to keep the server machine clean and light - no GUI, no dev tools, etc.) This last question is really about binary compatibility between Ubuntu and Debian. I want the server to be robust, secure and stable, and simply act as a server (i.e. LAMP and very little else - no GUI etc.). Given that requirement, and the fact that I need to run my C++ apps (developed on Ubuntu 9.10), I need advice on which OS to choose for the server. Ideally, any advice would be backed with a reason. I am particularly interested in hearing from people who have been in an identical situation or done something similar.
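
    The binary-compatibility question can be checked empirically before choosing: compare what a binary demands against what the server provides (./mydaemon is a placeholder for one of the daemons):

        # shared libraries the daemon needs - anything 'not found' on the server must be installed:
        ldd ./mydaemon
        # highest glibc symbol version the binary requires:
        objdump -T ./mydaemon | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -n1
        # compare against the target server's glibc:
        ldd --version | head -n1

    Lenny ships an older glibc and libstdc++ than Ubuntu 9.10, so a binary built on 9.10 may demand symbol versions Lenny cannot satisfy; building in a chroot or VM of the target distribution (or linking statically where licensing permits) sidesteps the issue.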

    Read the article

  • How to maintain authentication details/passwords in a 50-person company

    - by sabya
    What process do you follow to maintain authentication details like login IDs and passwords? There will definitely be some shared passwords, so the goal is to minimize the impact when someone leaves the company. By "shared password", I mean an account that is shared among multiple people in the company. The issues the process should address are: (1) Affected areas - quickly find the resources the departing user had access to. (2) Forgotten passwords - what happens if a user forgets their authentication details? How do they get them back? I don't think they should ask a teammate - I mean no verbal communication. (3) Dependencies of a resource - suppose I am changing the password for a mail account that some automated scripts use to send mail. The scripts depend on the mail account, so changing the account's password means changing the password in the scripts too. So, how do I find all the dependencies of a resource? I'd prefer a process that addresses these issues, but you can also recommend products, as long as they are open source and not hosted. I have gone through PassPack, but it doesn't solve (3). There is a similar question here, but it does not exactly answer mine.

    Read the article

  • Surprising corruption and never-ending fsck after resizing a filesystem.

    - by Steve Kemp
    The system in question has Debian Lenny installed, running a 2.6.27.38 kernel. It has 16 GB of memory and 8x1 TB drives behind a 3ware RAID card. The storage is managed via LVM. Short version: we run a KVM guest which had 1.7 TB of storage allocated to it. The guest was reaching a full disk, so we decided to resize the disk it runs on. We're pretty familiar with LVM and KVM, so we figured this would be a painless operation: Stop the KVM guest. Extend the size of the LVM volume: "lvextend -L+500G ..." Check the filesystem: "e2fsck -f /dev/mapper/..." Resize the filesystem: "resize2fs /dev/mapper/..." Start the guest. The guest booted successfully, and running "df" showed the extra space; however, a short time later the system decided to remount the filesystem read-only, without any explicit indication of error. Being paranoid, we shut the guest down and ran the filesystem check again. Given the new size of the filesystem, we expected this to take a while, but it has now been running for 24 hours and there is no indication of how long it will take. Using strace I can see the fsck is "doing stuff"; similarly, running "vmstat 1" I can see a lot of block input/output operations occurring. So now my question is threefold: Has anybody come across a similar situation? Generally we've done this kind of resize in the past with zero issues. What is the most likely cause? (The 3ware card shows the RAID arrays of the backing stores as A-OK, the host system hasn't rebooted, and nothing in dmesg looks important/unusual.) Ignoring btrfs and ext4 (not mature enough to trust), should we make our larger partitions with a different filesystem in the future, to avoid either this corruption (whatever the cause) or the long fsck time? xfs seems like the obvious candidate?
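
    On the "no indication of how long it will take" point: e2fsck can display a progress bar, and can even be told to start doing so after the fact (the device path is a placeholder):

        # start (or restart) the check with a progress indicator on stdout:
        e2fsck -C0 -f /dev/mapper/vg-guestlv
        # or toggle the progress bar on an already-running e2fsck:
        kill -USR1 $(pidof e2fsck)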

    Read the article

  • How to debug a kernel created using ubuntu-vm-builder?

    - by user265592
    Aim: trying to perform a code walkthrough of which functions get called for sending and receiving packets over the network. I am building a kernel and using gdb for debugging/tracing purposes. I have built a VM using the following command: time sudo ubuntu-vm-builder qemu precise --arch 'amd64' --mem '1024' --rootsize '4096' --swapsize '1024' --kernel-flavour 'generic' --hostname 'ubuntu' --components 'main' --name 'Bob' --user 'ubuntu' --pass 'ubuntu' --bridge 'br0' --libvirt 'qemu:///system' And I can run the VM successfully in qemu using the following command: qemu-system-x86_64 -smp 1 -drive file=tmpGgEOzK.qcow2 "$@" -net nic -net user -serial stdio -redir tcp:2222::22 Now I want to debug the kernel using gdb. For this I need an executable with debug symbols (vmlinux), which apparently I don't have, as the vm-builder never asked for any such options and simply created a .qcow2 file. Question 1: Am I taking the correct approach to solve the problem, and is there an easier way to do it? Question 2: Is there a way to debug this kernel using GDB? P.S.: I don't have hardware support for KVM. Please correct me if I am wrong. Thanks.
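
    For question 2, QEMU has a built-in gdb stub; the missing piece is a vmlinux with debug info, which means building the guest kernel with CONFIG_DEBUG_INFO (or installing a matching debug-symbol kernel package). A sketch of the debug session itself:

        # boot the guest with a gdb stub on :1234, frozen until the debugger attaches:
        qemu-system-x86_64 -s -S -smp 1 -drive file=tmpGgEOzK.qcow2 -net nic -net user
        # in another terminal:
        gdb vmlinux
        (gdb) target remote :1234
        (gdb) break netif_receive_skb    # receive path
        (gdb) break dev_queue_xmit       # transmit path
        (gdb) continue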

    Read the article

  • Oracle with Kerberos authentication and Windows 2003 Server as KDC

    - by Supaplex
    Hello everyone. I am running Oracle 10.2 on a Windows 2003 Server SP2 which is also the domain controller on the network. I wish to switch the authentication method from NTS to Kerberos. I have spent a lot of time trying to configure Oracle for Kerberos authentication via the Oracle Advanced Security option in the Net Manager utility. I have disabled NTS so that Kerberos is promoted as the preferred authentication method, but as soon as the configuration is saved from Net Manager and I restart the Oracle service, Oracle will not start. I don't know what Oracle is complaining about, because I don't know where to look for the Oracle error log. My first question: how can I figure out what's bugging Oracle? My second question: is there a good tutorial for setting up Oracle on Windows 2003 with Kerberos authentication, where the Windows 2003 server is the KDC? Maybe there is a book I can get? I have read Oracle's own guide, but it is mostly for Linux/Unix. Thanks a lot!
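
    On the first question: for 10.2 on Windows, the alert log lives under %ORACLE_BASE%\admin\<SID>\bdump\alert_<SID>.log, and network-layer errors go to sqlnet.log and the files under %ORACLE_HOME%\network\log. For the second, the Kerberos side of sqlnet.ora boils down to a handful of parameters. A sketch only - the paths and the service name are placeholders that must match the KDC setup:

        SQLNET.AUTHENTICATION_SERVICES = (KERBEROS5)
        SQLNET.AUTHENTICATION_KERBEROS5_SERVICE = oracle
        SQLNET.KERBEROS5_CONF = C:\oracle\krb5\krb5.conf
        SQLNET.KERBEROS5_KEYTAB = C:\oracle\krb5\v5srvtab
        SQLNET.KERBEROS5_CONF_MIT = TRUE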

    Read the article

  • BIND9 server types

    - by aGr
    I was configuring DNS on my server using BIND9, and everything seems to work, but I have a question regarding my config file. I've ended up with this configuration in /etc/bind/named.conf.local:

        zone "example.com" {
            type master;
            file "/etc/bind/db.example.com";
            allow-transfer { 192.168.1.1; };
        };

        zone "1.168.192.in-addr.arpa" {
            type master;
            notify no;
            file "/etc/bind/db.192";
            allow-transfer { 192.168.1.1; };
        };

        forwarders {
            10.253.22.140;
            10.253.22.141;
        };

    I've read about the different types of DNS server, like primary master etc. The first two parts (the zone statements) correspond to a primary (master) DNS server configuration - the first for "classic" forward lookup, the second for reverse lookup. The last part (forwarders) is the caching-server configuration and contains the IPs of my ISP's DNS servers, so all names resolved through these forwarders will be cached. Simple question: am I right? Does my description make sense? Or can one server only be either a master or a caching server?
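
    One server can happily be both: authoritative (master) for its own zones and caching/forwarding for everything else. One caveat, though: a bare forwarders block is not a valid top-level statement in named.conf.local - it belongs inside the options block, usually kept in /etc/bind/named.conf.options. A sketch:

        options {
            forwarders {
                10.253.22.140;
                10.253.22.141;
            };
        };

    Then verify with named-checkconf (no output means the configuration parses) and named-checkzone example.com /etc/bind/db.example.com.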

    Read the article

  • AMD 700 and 800 series chipsets. I'm lost.

    - by Shiki
    I've been an Intel/NVidia user ever since I started using computers. Intel has really gone up in price, and it won't get cheaper, so I decided to get an AMD. But WHICH one? I mean... this is not a shopping question, but... what are the differences? For example, the 880GMA comes with only a single PCIe slot and looks like a cheap knock-off (no offense), while the 890FX comes with five PCIe slots for quad CrossFire. Also, what's the deal with the 7xx series? It's the same price, yet it's older? Or why is it 7xx? Isn't there a single chipset in between - not a knock-off, yet durable and fine for long-term use? What it should support (desktop stuff): an NVidia GPU (Zalman AMP2 GTX 260^2 - one card), a Phenom 1090T CPU, and somewhat good audio. Any ideas which chipset I'm looking for? If this sounds too much like a shopping question, feel free to edit.

    Read the article
