Search Results

Search found 19017 results for 'purchase order'.


  • Deploy Rails app from Hudson

    - by brad
    I'm using Hudson as my CI server and it works great: builds run their tests, code metrics, all that good stuff. But at the moment that's it, no automated deployment; I have to do that manually afterwards. I haven't found any sort of Capistrano plugin for Hudson, and I can't even see where I could just run my cap deploy after a successful build. Does anyone have any idea what I need in order to automate a deployment to a testing server on a successful build? I'd like each commit to force a build and in turn a deploy to testing, so I can see everything right away.
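
    One plugin-free option, since Hudson runs build steps in order and stops at the first failure: a final "Execute shell" build step is only reached when the test and metrics steps before it all passed. A hedged sketch (the deploy stage and bundler usage are assumptions about your setup):

        # last "Execute shell" build step in the Hudson job;
        # only reached if every earlier step succeeded
        cd "$WORKSPACE"
        cap deploy        # or "bundle exec cap staging deploy" under bundler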

    Read the article

  • Do you like Google Gadgets? Check out this gadget for DotNetNuke Administration

    - by Brian Scarbeau
    I discovered this cool Google gadget over at STP Systems. Once installed, the gadget lets you see information about your DotNetNuke server directly in your iGoogle account. You can view information about all your portals as well. Check out the YouTube video on the product. Here are some screenshots from the STP Systems site of what gets displayed as a gadget: Server Health, Most Popular Pages, User Activity, Watchdog, Visitors. Installation is very easy. All you have to do is go to the site, download the module, and install it on your DotNetNuke portal. Place the module on a test page. The module generates an encrypted GUID which has to be copied and pasted into your gadget in order to establish the connection. Note: only the DNN Super User account holder can access the installed module and generate the GUID. You then add the DotNetNuke gadget to your iGoogle from the module settings. In iGoogle, go to the edit settings for the gadget and paste in the GUID that you generated from the module. Try it out! It's a nice gadget to have. Technorati Tags: DotNetNuke, Google, iGoogle, Module

    Read the article

  • Summit Time!

    - by Ajarn Mark Caldwell
    Boy, how time flies!  I can hardly believe that the 2011 PASS Summit is just one week away.  Maybe it snuck up on me because it’s a few weeks earlier than last year.  Whatever the cause, I am really looking forward to next week.  The PASS Summit is the largest SQL Server conference in the world, with a fantastic networking opportunity thrown in for no additional charge.  Here are a few thoughts to help you maximize the week. Networking As Karen Lopez (blog | @DataChick) mentioned in her presentation for the Professional Development Virtual Chapter just a couple of weeks ago, “Don’t wait until you need a new job to start networking.”  You should always be working on your professional network.  Some people, especially technical-minded people, get confused by the term networking.  The first image that used to pop into my head was of some guy standing, awkwardly, off to the side of a cocktail party, trying to schmooze those around him.  That’s not what I’m talking about.  If you’re good at that sort of thing, and you can strike up a conversation with a stranger, learn all about them in 5 minutes, and walk away with your next business deal all but approved by the lawyers, then congratulations.  But if you’re not, and most of us are not, I have two suggestions for you.  First, register for Don Gabor’s 2-hour session on Tuesday at the Summit called Networking to Build Business Contacts.  Don is a master at small talk, and at teaching others, and in just those two short hours will help you with important tips about breaking the ice, remembering names, and smooth transitions into and out of conversations.  Then go put that great training to work right away at the Tuesday night Welcome Reception and meet some new people; which is really my second suggestion…just meet a few new people.  You see, “networking” is about meeting new people and being friendly, without trying to “work it” to get something out of the relationship at this point.  In fact, Don will tell you that a better way to build the connection with someone is to look for some way that you can help them, not how they can help you. There are a ton of opportunities as long as you follow this one key point: Don’t stay in your hotel!  At the least, get out and go to the free events such as the Tuesday night Welcome Reception, the Wednesday night Exhibitor Reception, and the Thursday night Community Appreciation Party.  All three of these are perfect opportunities to meet other professionals with a similar job or interests, and you never know how that may help you out in the future.  Maybe you just meet someone to say HI to at breakfast the next day instead of eating alone.  Or maybe you cross paths several times throughout the Summit and compare notes on different sessions you attended.  And you just might make new friends that you look forward to seeing year after year at the Summit.  Who knows, it might even turn out that you have some specific experience that will help out that other person a few months from now, when they run into the same challenge that you just overcame, or vice versa.  But the point is, if you don’t get out and meet people, you’ll never have the chance for anything else to happen in the future. 
One more tip for shy attendees of the Summit: if you can’t bring yourself to strike up a conversation with strangers at these events, then at the least, after you sit through a good session that helps you out, go up to the speaker, introduce yourself, and thank them for taking the time and effort to put together their presentation.  Ideally, when you do this, tell them WHY it was beneficial to you (e.g. “Now I have a new idea of how to tackle a problem back at the office.”)  I know you think the speakers are all full of confidence and are always receiving a ton of accolades and applause, but you’re wrong.  Most of them will be very happy to hear first-hand that all the work they put into getting ready for their presentation is paying off for somebody. Training With over 170 technical sessions at the Summit, training is what it’s all about, and the training is fantastic!  Of course there are the big-name trainers like Paul Randall, Kimberly Tripp, Kalen Delaney, Itzik Ben-Gan and several others, but I am always impressed by the quality of the training put on by so many other “regular” members of the SQL Server community.  It is amazing how you don’t have to be a published author or otherwise recognized as an “expert” in an area in order to make a big impact on others, just by sharing your personal experience and lessons learned.  I would rather hear the story of, and lessons learned from, “some guy or gal” who has actually been through an issue and come out the other side, than from a trained professor who is speaking just from theory or an intellectual understanding of a topic. In addition to the three full days of regular sessions, there are also two days of pre-conference intensive training available.  There is an extra cost to this, but it is a fantastic opportunity.  Think about it…you’re already coming to this area for training, so why not extend your stay a little bit and get some in-depth training on a particular topic or two?  I did this for the first time last year.  I attended one day of extra training and it was well worth the time and money.  One of the best reasons for it is that I am so busy at home with my regular job and family that it is hard to carve out the time to learn about the topic on my own.  It worked out so well last year that I am doubling up and doing two days of “pre-cons” this year. And then there are the DVDs.  I think these are another great option.  I used the online schedule builder to get ready and have an idea of which sessions I want to attend and when they are (much better than trying to figure this out at the last minute every day).  But the problem that I have run into (it seems this happens every year) is that nearly every session block has two different sessions that I would like to attend.  And some of them have three!  ACK!  That won’t work!  What is a guy supposed to do?  Well, one option is to purchase the DVDs, which are recordings of the audio and projected images from each session, so you can continue to attend sessions long after the Summit is officially over.  Yes, many (possibly all) of these also get posted online, and attendees can access those for no extra charge, but those are not necessarily all available as quickly as the DVD recordings are, and the DVDs are often more convenient than downloading, especially if you want to share the training with someone who was not able to attend in person. Remember, I don’t make any money or get any other benefit if you buy the DVDs or from anything else that I have recommended here.  
These are just my own thoughts, trying to help out based on my experiences from the 8 or so Summits I have attended.  There is nothing like the Summit.  It is an awesome experience, fantastic training, and a whole lot of fun which is just compounded if you’ll take advantage of the first part of this article and make some new friends along the way.

    Read the article

  • solr reverse proxy Apache2

    - by Steven
    I am trying to set up Apache2 as a reverse proxy for Solr. Apache and Solr are on the same machine, and Apache is serving other stuff as a regular web server too. The solrsearch config file in /etc/apache2/conf.d/:

        # Proxy specific settings
        ProxyRequests Off
        ProxyPreserveHost Off
        <Proxy *>
            AddDefaultCharset off
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /solrsearch http://localhost:8983/solr/collection1/browse
        ProxyPassReverse /solrsearch http://localhost:8983/solr/collection1/browse

    Now trying http://localhost/solrsearch gives me the first page of http://localhost:8983/solr/collection1/browse, but with a broken layout (like the CSS is missing). Result in Apache's error.log: File does not exist: /var/www/solr, referer: http://192.168.1.150/solrsearch
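
    The error.log line suggests the page's CSS and JS are being requested under /solr/..., a path this config never proxies, so Apache looks for them on disk instead. A hedged addition to the same config (assuming the assets really do live under /solr on the Jetty side):

        # also forward the asset paths the /browse page links to
        ProxyPass /solr http://localhost:8983/solr
        ProxyPassReverse /solr http://localhost:8983/solr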

    Read the article

  • Best way to use GIT to maintain web application template

    - by Darren
    I am a sole developer and I have a web application template that I have created in Visual Studio. I am using GIT for source control, but only on my development machine. Presently I have a master and I create branches for new features, merging them back in to the master as I complete the features. I am at a point now where I am ready to use the template for deployments, and of course I want to continue adding new features via branching/merging. My question is: what would be the typical/recommended way for me to create application deployments based on the master? Should I clone the repository into a new directory that is for a particular web application? Or should I also use branching to do project development based on the main project? The projects would never be merged back into the master. However, it would be nice if I could merge future features into the master and have the ability to incorporate them into previously completed projects if desired. For more specific details of my environment: I am using TortoiseGIT in Windows 7, Visual Studio 2012, ASP.NET Web Pages. Obviously the main differences between deployments would simply be differing pages, CSS files and jQuery scripts. I found this post as I was writing this one. In order to do this should I clone the master repository and checkout from it?
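
    A hedged sketch of the clone-per-deployment option being described (paths are illustrative): each project gets its own repository cloned from the template, with the template kept as a named remote so future template features can still be merged in:

        # create a project from the template
        git clone /path/to/template-repo /path/to/client-site
        cd /path/to/client-site
        git remote rename origin template

        # later, pull selected template improvements into the project
        git fetch template
        git merge template/master     # or cherry-pick individual features

    Since project work never flows back, the template's master stays clean while every project can still receive upstream improvements.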

    Read the article

  • script not run after reboot from /etc/rc3.d

    - by yael
    I created a symbolic link to the file /etc/rc3.d/platform.bash from /var/tmp/platform.bash:

        ln -s /var/tmp/platform.bash /etc/rc3.d/platform.bash

    The script exists under /var/tmp:

        -rwxr-xr-x 1 root root 58442 Aug 30 08:49 platform.bash

    View from /etc/rc3.d:

        lrwxrwxrwx 1 root root 31 Aug 30 06:33 S99platform.bash -> /var/tmp/platform.bash

    My target is to run platform.bash after reboot (on Solaris 10). For some reason the script does not run after reboot. Please advise what I need to check in order to find the problem. My script (platform.bash):

        #!/bin/bash
        echo test > /var/tmp/log.txt
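
    Two hedged things to check: the ln command shown creates a link named plain "platform.bash", which /sbin/rc3 skips (it only runs S-prefixed entries), yet the listing shows S99platform.bash, so confirm which name actually exists. Also, Solaris 10 runs legacy rc scripts roughly as "/sbin/sh /etc/rc3.d/S99platform.bash start", so the #!/bin/bash line is ignored; keep the script sh-compatible and handle the "start" argument. A minimal sketch:

        #!/sbin/sh
        # invoked by /sbin/rc3 with "start" as $1; keep it plain Bourne sh
        case "$1" in
        start)
                echo test > /var/tmp/log.txt
                ;;
        esac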

    Read the article

  • Problems getting Squirrelmail and passenger working on apache

    - by Kenneth
    I'm trying to have a setup where I run SquirrelMail and Passenger on the same Apache server, with one URL pointing to SquirrelMail and everything else handled by Passenger. I've gotten so far that both SquirrelMail and Passenger run fine by themselves, but when Passenger is running it handles all URLs. So far I've tried using Alias and Redirect to point a webmail/ URL at SquirrelMail's directory, but that does not work. Here is my httpd.conf file:

        <VirtualHost *:80>
            ServerName not.my.real.server.name
            DocumentRoot /var/www/sinatra/public

            # Does not work:
            #Redirect webmail/ /usr/share/squirrelmail/
            #<Directory /usr/share/squirrelmail>
            #    Require all granted
            #</Directory>

            <Directory /var/www/sinatra/public>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
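
    A hedged sketch of the usual pattern: Alias maps the URL to the SquirrelMail directory, and Passenger's own PassengerEnabled directive tells it to leave that subtree alone (the path is an assumption; the Order/Allow lines match the 2.2-style directives already used above):

        Alias /webmail /usr/share/squirrelmail
        <Directory /usr/share/squirrelmail>
            PassengerEnabled off     # keep Passenger's hands off this subtree
            Order allow,deny
            Allow from all
        </Directory>

    Then restart Apache so both the Alias and the Passenger change take effect.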

    Read the article

  • How do I boot into console mode (redux)

    - by Leo Simon
    I'm running Ubuntu 12.04. This question was asked some time ago (How do I disable the boot splash screen?) but the answers didn't work for me. The standard way to boot into console mode used to be to edit /etc/default/grub and set GRUB_CMDLINE_LINUX_DEFAULT="text". This worked fine until I ran the fix proposed in https://help.ubuntu.com/community/SoundTroubleshootingProcedure in order to get sound to work. Since then, I have disabled the boot splash screen, but I cannot avoid what I presume is the lightdm login prompt screen. All I want to do is disable this GUI and be prompted with a console login prompt. (Shouldn't be so hard, should it?) I read in thread 33416, mentioned above, that there was a bug in lightdm (it wasn't recognizing "text" properly as an option for GRUB_CMDLINE_LINUX_DEFAULT), but that discussion happened more than a year ago and it's surely been fixed. Yet my lightdm is up to date (so I'm told when I try to update it with apt-get). As suggested in one of the answers, I tried sudo update-rc.d -f lightdm remove, which resulted in a hung machine. I managed to recover using recovery mode, but now I still get the GUI again. Another suggestion is to edit /etc/init/lightdm.override. I've done this and set it to "manual" as suggested, but lightdm simply ignores it. Could somebody suggest how to proceed please? Thanks very much, Leo
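
    A hedged recap of the usual 12.04 recipe, in case one step was missed: the override file must contain exactly the word "manual", and editing /etc/default/grub does nothing until update-grub regenerates grub.cfg:

        echo manual | sudo tee /etc/init/lightdm.override   # stop lightdm from starting
        sudo nano /etc/default/grub    # set GRUB_CMDLINE_LINUX_DEFAULT="text"
        sudo update-grub               # easy to forget; the edit alone has no effect
        sudo reboot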

    Read the article

  • What extra packages are needed by Amarok to transcode to MP3?

    - by Jon Pawley
    I'm using Amarok 2.6.0, on KDE 4.9.3, on Kubuntu 12.04. I would like to be able to copy my music onto my MP3 player (in this case, my iPhone 3), but to transcode the tracks as I copy them over. However, when I right-click on the selected track and choose "Copy to Collection" and select my iPhone, the option to transcode to MP3 is greyed out. What additional packages does Amarok need in order to enable the transcode-to-MP3 option? Thanks, Jon. The "Amarok Diagnostics" output from the Help menu gives:

        Amarok Version: 2.6.0
        KDE Version: 4.9.3
        Qt Version: 4.8.2
        Phonon Version: 4.6.0
        Phonon Backend: GStreamer (4.6.2)
        PulseAudio: Yes

    Amarok Scripts: Amarok Script Console 1.0 (stopped), Discogs 1.1b (stopped), Lyricwiki .2 (stopped), Free Music Charts 1.6.0 (stopped), Librivox.org 1.0 (stopped), Cool Streams 1.0 (stopped), BBC 1.1 (stopped). Amarok Plugins: AudioCd Collection (enabled), DAAP Collection (enabled), MTP Collection (enabled), MySQLServer Collection (enabled), MySQLe Collection (enabled), UPnP Collection (enabled), Universal Mass Storage Collection (enabled), iPod, iPad & iPhone Collection (enabled), Ampache (disabled), Jamendo (disabled), Last.fm (enabled), MP3 Music Store (disabled), MP3tunes (disabled), Magnatune Store (disabled), Podcast Directory (enabled), gpodder.net (enabled), Local Files & USB Mass Storage Backend (enabled), NFS Share Backend (enabled), SMB (Windows) Share Backend (enabled)
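
    A hedged guess at the missing pieces: Amarok delegates transcoding to ffmpeg, and MP3 output additionally needs the LAME encoder, so on 12.04 something along these lines may be enough (package names are guesses and vary between releases):

        sudo apt-get install ffmpeg libavcodec-extra-53
        # then restart Amarok so it re-detects the available encoders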

    Read the article

  • Best practice for assigning private IP ranges?

    - by Tauren
    Is it common practice to use certain private IP address ranges for certain purposes? I'm starting to look into setting up virtualization systems and storage servers. Each system has two NICs, one for public network access, and one for internal management and storage access. Is it common for businesses to use certain ranges for certain purposes? If so, what are these ranges and purposes? Or does everyone do it differently? I just don't want to do it completely differently from what is standard practice in order to simplify things for new hires, etc.

    Read the article

  • Wifi connected but no data transfer

    - by Anuj
    I have a desktop which runs Windows XP and a laptop which runs Ubuntu. Recently I set up a wireless router in order to access the internet on my laptop through wifi. The laptop connects to the wifi with ease, but is unable to transfer any data. Only when I first switch on the laptop is it able to transfer some data, and only for around 2 minutes; after that, pinging the router shows Destination Host Unreachable and everything stops working, though the wifi still shows as connected. Please help!

    Read the article

  • postgres memory allocation tuning 2

    - by pstanton
    I've got an Ubuntu Linux system with 12GB memory, most of which (at least 10GB) can be allocated solely to Postgres. The system also has a 6-disk 15k SCSI RAID 10 setup. The process I'm trying to optimise is twofold. Firstly, a single-threaded, single-connection process will do many inserts into 2-4 tables linked by foreign keys. Secondly, many different complex queries are run against the resulting data, using GROUP BY extensively; this part especially needs to be optimised. I have four of these processes running at once in order to make use of the quad-core CPU, so there will generally be no more than 5 concurrent connections (1 spare for admin tasks). What configuration changes to the default Postgres config would you recommend? I'm looking for the optimum values for things like work_mem, shared_buffers etc. Relevant doco linked. Thanks!
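
    A hedged starting point for postgresql.conf on a 12GB box mostly dedicated to Postgres; these are common rules of thumb, not measured values, and deserve benchmarking against the real workload:

        shared_buffers = 3GB           # ~25% of RAM is the usual guideline
        effective_cache_size = 9GB     # planner hint: what the OS cache will hold
        work_mem = 256MB               # per sort/hash; generous is safe with ~5 connections
        maintenance_work_mem = 1GB     # faster index builds and vacuums
        checkpoint_segments = 32       # fewer, larger checkpoints for the insert phase
        wal_buffers = 16MB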

    Read the article

  • How to ignore certain coding standard errors in PHP CodeSniffer

    - by Tom
    We have a PHP 5 web application and we're currently evaluating PHP CodeSniffer in order to decide whether enforcing code standards improves code quality without causing too much of a headache. If it seems good we will add an SVN pre-commit hook to ensure all new files committed on the dev branch are free from coding-standard smells. Is there a way to configure PHP CodeSniffer to ignore a particular type of error, or to treat a certain error as a warning instead? Here is an example to demonstrate the issue:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        </head>
        <body>
            <div>
            <?php
            echo getTabContent('Programming', 1, $numX, $numY);
            if (isset($msg)) {
                echo $msg;
            }
            ?>
            </div>
        </body>
        </html>

    And this is the output of PHP_CodeSniffer:

        > phpcs test.php
        --------------------------------------------------------------------------------
        FOUND 2 ERROR(S) AND 1 WARNING(S) AFFECTING 3 LINE(S)
        --------------------------------------------------------------------------------
         1 | WARNING | Line exceeds 85 characters; contains 121 characters
         9 | ERROR   | Missing file doc comment
        11 | ERROR   | Line indented incorrectly; expected 0 spaces, found 4
        --------------------------------------------------------------------------------

    I have an issue with the "Line indented incorrectly" error. I guess it happens because I am mixing PHP indentation with HTML indentation, but this makes it more readable, doesn't it? (Taking into account that I don't have the resources to move to an MVC framework right now.) So I'd like to ignore it.
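
    Newer PHP_CodeSniffer releases (1.3+) accept a custom ruleset.xml in which a single sniff's severity can be zeroed out. A hedged sketch; run "phpcs -s test.php" first to learn the real sniff code, since Generic.WhiteSpace.ScopeIndent is a guess at the one producing "Line indented incorrectly":

        <?xml version="1.0"?>
        <ruleset name="MyStandard">
            <rule ref="PEAR"/>
            <!-- silence just the indentation sniff -->
            <rule ref="Generic.WhiteSpace.ScopeIndent">
                <severity>0</severity>
            </rule>
        </ruleset>

    Then run it with: phpcs --standard=/path/to/ruleset.xml test.php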

    Read the article

  • How can I retrieve the details of the file from an outbound operation in BPEL 11g

    - by [email protected]
    Several times, we come across requirements where we need to capture the details of the file that got written out as part of a BPEL process invoking a File/FTP Adapter. Consider a case where we're using FileNamingConvention as "PurchaseOrder_%SEQ%.txt" and we need to do some post-processing based on the filename (please remember that we wouldn't know the filename until the adapter invocation completes). In order to achieve this, we need to manually tweak the WSDL so that the File/FTP Adapter can return the metadata of the file that was written out. In general, the File/FTP Write/Put WSDL operations are one-way. [Screenshot of the one-way WSDL omitted.] The File/FTP Adapters are designed to return the metadata back if this WSDL is tweaked into a two-way WSDL. In addition, the <wsdl:output/> must import the fileread.xsd schema; you will need to copy fileread.xsd from here into the xsd folder of your composite. [Screenshot omitted.] Finally, we will need to tweak the WSDL. [Screenshot of the highlighted WSDL edits omitted.] The BPEL <invoke> would then look as shown in the final screenshot [omitted]; please note that the file metadata is returned as part of the BPEL output variable.
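
    A hedged illustration of the shape the tweaked operation ends up with (the message names here are invented; the real ones come from the adapter-generated WSDL):

        <wsdl:operation name="Write">
            <wsdl:input  message="tns:Write_msg"/>
            <!-- the added output turns the operation two-way; its message
                 type is defined by the imported fileread.xsd -->
            <wsdl:output message="tns:WriteResponse_msg"/>
        </wsdl:operation>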

    Read the article

  • How to proxy to different named databases on the same server using MySQL Proxy?

    - by cclark
    I would like to have two databases on my MySQL server: DEV_DB_A and DEV_DB_B. However, in order to keep everyone's scripts, Query Browser settings and anything else from changing when we switch from using one DB to the other, I'd like to have everyone connect to DEV_DB and then use something like MySQL Proxy running a Lua script which knows the currently active DB is DEV_DB_A and routes queries there. If we restore a fresh version of the DB to DEV_DB_B or make some changes (e.g. partition a table), we can easily switch to DEV_DB_B by changing one Lua script instead of updating references everywhere. I had hoped I might be able to symlink inside of the mysql data directory, but that didn't work, so MySQL Proxy seems a reasonable approach. Being new to Lua and MySQL Proxy, I'm wondering if anyone else has approached the problem this way and how it worked.
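
    A hedged Lua sketch of such a script, built on MySQL Proxy's documented read_query() hook (the COM_INIT_DB rewrite follows the standard proxy example pattern; names are illustrative):

        local active_db = "DEV_DB_A"    -- flip to "DEV_DB_B" when switching

        function read_query(packet)
            -- a client selecting the alias database ("USE DEV_DB" or the
            -- connection's default schema) arrives as a COM_INIT_DB packet
            if string.byte(packet) == proxy.COM_INIT_DB then
                if packet:sub(2) == "DEV_DB" then
                    proxy.queries:append(1,
                        string.char(proxy.COM_INIT_DB) .. active_db,
                        { resultset_is_needed = true })
                    return proxy.PROXY_SEND_QUERY
                end
            end
        end

    One caveat: queries that name the schema explicitly (SELECT ... FROM DEV_DB.t) would still need their text rewritten in the same hook.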

    Read the article

  • ubuntu 12.10 installation failure

    - by Eidelmaim
    Here I am asking this question again, because someone deemed it a duplicate of another topic which, when I looked it over, contained NOTHING AT ALL that pertained to my problem. If you're going to close a topic believing it is a duplicate, at least do some research into WHY you think it's a duplicate and provide a link to a better source. How do I get past this installation username and password issue? I downloaded Ubuntu 12.10 directly from Ubuntu.com and created a bootable USB with LinuxLive. After loading the boot drive and Ubuntu begins, it goes directly from the purple Ubuntu startup screen to a black DOS-like prompt asking for an Ubuntu login. This is COMPLETELY before any installation begins. I need some help with this. FYI, this is what it says after it goes to the login area on the full black screen:

        Ubuntu 12.10 ubuntu tty1
        ubuntu login:

    Now I will provide a few images of the problem I am having. Because I DON'T HAVE ANY OS on the computer, since Ubuntu WON'T go PAST this, I have to snap these pictures with a cell phone and upload them on another PC. These links are in chronological order from the time of pressing the power button to the time I am presented with the login screen: image 1, image 2 (can only post 2 links in messages... will post additional links in comments). So, again: how do I get past this? This is entirely before Ubuntu is installed on my system. My PC specs (homebuilt computer): Asus Sabertooth X58 motherboard with an Intel Core i7 processor, Mushkin memory @ 12GB, 4 Seagate 150GB hard drives, NVIDIA GTX 260 graphics card. I initially attempted to install to RAID 5 and it failed; I then broke down the RAID and attempted to install to a single drive with all other drives disconnected from the PC. Again, thanks in advance for any assistance.

    Read the article

  • Minimize the chance my email is blocked/filtered as spam

    - by justSteve
    I'm running a web-based store where order confirmations are sometimes blocked and don't reach the intended user. The structure of the business model is such that our product is marketed to the end-user by 3rd parties: affiliates who are known entities to the end-users, and email is freely exchanged between end-users and our affiliates. Our confirmations being blocked is becoming a big enough problem that we are considering implementing a system where a 'confirmations' address is created within the affiliate's domain, and we'd then have our app send via the affiliate's mail server instead of our own. But that'd be lots of work. The idea has been raised to have our app use our affiliate's email in the FROM field but still send from our server. My thinking is that this would be detected on the end-user's side and blocked just as often; we're dealing with institutions large enough to run at least some checks at the perimeter. Is this assumption correct (more likely to be blocked), or is there a less roundabout way to send messages under the auspices of 3rd parties? Thx
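
    A hedged note on that assumption: sending with the affiliate's address in FROM from your own server tends to fail exactly the checks you fear (SPF and sender verification), unless the affiliate publishes your sending server in their SPF record. That is a one-line DNS change on their side, something like this TXT entry (domains are illustrative):

        affiliate-example.com.  IN  TXT  "v=spf1 mx include:_spf.yourstore-example.com ~all"

    Adding DKIM signing or a Sender/Reply-To header pointing at the affiliate are gentler variations on the same idea.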

    Read the article

  • Does concurrency inherently introduce "randomness" into a game?

    - by Jeff
    When a game is implemented with concurrency (as most games are), does this necessarily, by its very nature, introduce an element of randomness into the game that is outside of the players' control? Note that when I use the word "random", I'm not meaning to launch into a philosophical debate about the deterministic nature of the system. I understand that concurrency is deterministic in the sense that the operating system decides which processes to allow time on the CPU and in what order (or the JVM controls which Thread's turn it is to execute, etc). But my understanding of this is that there is no way to control or predict whether one thread's next command will execute before or after another. The reason I'm asking is because this seems like a fundamental difficulty for game development where a game is supposedly designed around a player's skill. Consider a game like League of Legends. Assume that two players are battling it out. It's a very close contest between the two and it's coming down to the wire -- so much so that whoever gets their last attack off will be the one to kill the other and win the game for their team. If the players are implemented using concurrency and the situation really was like this, is it essentially out of the players' hands at this point? Is the outcome of this match all up to whatever system is arbitrarily deciding which player's thread/process will execute next? If not, what am I misunderstanding about concurrency? If so, is there any way around this problem so that a game of skill can always be a game of skill, especially in those most crucial moments?

    Read the article

  • Constructs for wrapping a hardware state machine

    - by Henry Gomersall
    I am using a piece of hardware with a well-defined C API. The hardware is stateful, with the relevant API calls needing to be made in the correct order for the hardware to work properly. The API calls themselves will always return, passing back a flag that advises whether the call was successful and, if not, why not. The hardware will not be left in some ill-defined state; in effect, the API calls advise indirectly of the current state of the hardware when the state is not correct to perform a given operation. It seems to be a pretty common hardware API style. My question is this: is there a well-established design pattern for wrapping such a hardware state machine in a high-level language, such that consistency is maintained? My development is in Python. I ideally wish the hardware state machine to be abstracted to a much simpler state machine and wrapped in an object that represents the hardware. I'm not sure what should happen if an attempt is made to create multiple objects representing the same piece of hardware. I apologise for the slight vagueness; I'm not very knowledgeable in this area and so am fishing for help with the description as well!
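
    A hedged Python sketch of one common shape for this: funnel every C call through a single checkpoint that trusts the hardware's own status flag, and hand out exactly one wrapper object per physical device (the api_func callables stand in for the real C API bindings):

        class HardwareError(Exception):
            pass

        class Device:
            _instances = {}                      # one wrapper per device id

            def __new__(cls, dev_id):
                # same id -> same object, so two callers can never hold
                # disagreeing pictures of one device's state
                if dev_id not in cls._instances:
                    cls._instances[dev_id] = object.__new__(cls)
                return cls._instances[dev_id]

            def _call(self, api_func, *args):
                # every C API call goes through here; the returned flag
                # is the single source of truth about device state
                status = api_func(*args)
                if status != 0:                  # assuming 0 means success
                    raise HardwareError(api_func.__name__, status)
                return status

    Raising on every refused call keeps the Python-side model honest: the wrapper never has to mirror the full hardware state machine, only react to what the hardware reports.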

    Read the article

  • Google Analytics async=true seems wrong in the Google documentation?

    - by leeand00
    In the Google Analytics async example, they state that in order to include more than one tracker you need to set up your pages for asynchronous tracking, and they do so using the following code:

        <script type="text/javascript">
        _gaq.push(
            ['_setAccount', 'UA-XXXXX-1'],
            ['_trackPageview'],
            ['b._setAccount', 'UA-XXXXX-2'],
            ['b._trackPageview']
        );
        (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();
        </script>

    The second tracker is not receiving any results. After checking my tracking codes to make sure they are correct, I noticed that the ga.async = true statement is written differently almost everywhere else: it is usually set to the string "async" (as in the HTML attribute async="async"), never to true. Could this be stopping my Analytics data from posting to the second tracker, or might it be something else? Also, what calls should I look for in the Net tab in Firebug to ensure that GA is being called when the page loads?

    Read the article

  • Ubuntu installation does not recognize drive partitioning

    - by Woltan
    I have a 1TB drive and installed Windows 7 on a 128GB partition. When I now try to install Ubuntu 11.04, it does not recognize the Windows partition and instead offers the complete 1TB drive to install Ubuntu on. It displays: [screenshot of the installer's partitioning screen omitted]. However, in the Ubuntu Disk Utility the Windows partitions are recognized. What do I need to do in order for Ubuntu to recognize the Windows 7 partition and install Ubuntu as a dual boot? Response to comments: the following commands were executed and the results are shown below.

        fdisk -l

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x34a38165

           Device Boot    Start      End      Blocks  Id  System
        /dev/sda1   *         1       13      102400   7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2            13    16318   130969600   7  HPFS/NTFS

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x14a714a6

           Device Boot    Start      End      Blocks  Id  System
        /dev/sdb1             1    60801   488384001  83  Linux

        parted -l

        Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
        Error: /dev/sr0: unrecognised disk label
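
    A hedged reading of that fdisk warning: the drive appears to carry a leftover GPT signature alongside its MBR table, a mismatch that commonly makes the Ubuntu installer treat the disk as blank. One cleanup sketch, from the live session (FixParts ships in the gdisk package and is interactive, so review its menu before writing anything):

        sudo apt-get install gdisk
        sudo fixparts /dev/sda     # remove the stray GPT data, then re-run the installer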

    Read the article

  • Why is Apple System Image Utility so slow?

    - by Jon Rhoades
    I'm using Apple System Image Utility (SIU) on Snow Leopard 10.6.2, and I am rather disturbed that it takes over three hours to make a NetRestore or NetBoot image. I'm using a brand-new iMac as the donor machine and another brand-new iMac as the imaging machine, connected using target disk mode and FireWire 800. The hard drive size, and the subsequent image, is about 8GB. Restoring the image over the network takes about 4 minutes. Given that Norton Ghost will take an image in about 5 minutes (or less on newer machines) over USB2, why is the Mac over an order of magnitude slower?

    Read the article

  • Why does 12.04 upgrade abort with out of space error when I have lots of it?

    - by Kristian Thomsen
    When upgrading Ubuntu from 11.10 to 12.04 I discovered an unexpected problem: the upgrade was stopped because there wasn't enough free space for the installation. I managed to free some space and do the upgrade, but now a prompt appears after logging in saying I'm out of space. This prompt asks me if I want to examine the problem, and the "Disk Usage Analyser" is opened. At the top it says: Total filesystem capacity: 47.0 GB (used: 13.5 GB, available: 33.4 GB), then:

        Folder -- Usage -- Size
        /      -- 100%  -- 12.5 GB
        usr    -- 44.8% -- 5.6 GB
        home   -- 30.3% -- 3.8 GB
        lib    -- 13.0% -- 1.6 GB
        var    -- 9.1%  -- 1.1 GB
        boot   -- 2.5%  -- 309.5 MB

    and a lot of small contributors like etc, opt, sbin, bin etc. I do not really understand this problem, since the analyser at the top says that I have 33.4 GB left in this file system. What can I do to make Ubuntu use the remaining space? Running df -i in the terminal gives:

        Filesystem  Inodes   IUsed   IFree    IUse%  Mounted on
        /dev/sda7   610800   576874  33926    95%    /
        udev        213451   563     212888   1%     /dev
        tmpfs       218524   486     218038   1%     /run
        none        218524   3       218521   1%     /run/lock
        none        218524   7       218517   1%     /run/shm
        /dev/sda8   2264752  16371   2248381  1%     /home

    The output of df -h:

        Filesystem  Size   Used  Avail  Use%  Mounted on
        /dev/sda7   9,3G   7,8G  1,1G   88%   /
        udev        993M   4,0K  993M   1%    /dev
        tmpfs       401M   884K  400M   1%    /run
        none        5,0M   0     5,0M   0%    /run/lock
        none        1003M  152K  1002M  1%    /run/shm
        /dev/sda8   35G    4,0G  29G    13%   /home
        /dev/sda2   101G   64G   37G    64%   /media/A2C8E28BC8E25CD3

    Running sudo fdisk -l gives:

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000080

           Device Boot      Start        End     Blocks   Id  System
        /dev/sda1               63      96389      48163+  de  Dell Utility
        /dev/sda2   *        98304  210434488  105168092+   7  HPFS/NTFS/exFAT
        /dev/sda3        210436094  312576704   51070305+   f  W95 Ext'd (LBA)
        /dev/sda5        306279288  312576704    3148708+  dd  Unknown
        /dev/sda6        210436096  214341631    1952768   82  Linux swap / Solaris
        /dev/sda7        214343680  233873407    9764864   83  Linux
        /dev/sda8        233875456  306278399   36201472   83  Linux

        Partition table entries are not in disk order
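
    Reading the df output, the 33.4 GB free is on /home (/dev/sda8) and does not help the root filesystem: / is a separate 9.3G partition that is 88% full with 95% of its inodes used. A hedged list of the usual space recoverers on a small / partition:

        sudo apt-get clean          # empty the .deb cache in /var/cache/apt/archives
        sudo apt-get autoremove     # drop orphaned packages and old dependencies
        dpkg -l 'linux-image-*'     # list installed kernels; purge the unused ones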

    Read the article

  • Reading a large SQL Errorlog

    - by steveh99999
    I came across an interesting situation recently where a SQL instance had been configured to audit successful and failed logins, writing the results to the errorlog. This meant that every time a user or the application connected to the SQL instance, an entry was written to the errorlog, which in turn meant huge SQL Server errorlogs. Opening an errorlog in the usual way, using SQL Management Studio, was extremely slow. Luckily, I was able to use xp_readerrorlog to work around this; here are some example queries.

    To show errorlog entries from the currently active log, just for today:

        DECLARE @now DATETIME
        DECLARE @midnight DATETIME
        SET @now = GETDATE()
        SET @midnight = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)
        EXEC xp_readerrorlog 0, 1, NULL, NULL, @midnight, @now

    To find out how big the current errorlog actually is, and what the earliest and most recent entries in it are:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0  -- for current errorlog
        SELECT COUNT(*)     AS 'Number of entries in errorlog',
               MIN(logdate) AS 'ErrorLog Starts',
               MAX(logdate) AS 'ErrorLog Ends'
        FROM #temp_errorlog
        DROP TABLE #temp_errorlog

    To show just DBCC history information in the current errorlog:

        EXEC xp_readerrorlog 0, 1, 'dbcc'

    To show backup entries in the current errorlog:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0  -- for current errorlog
        SELECT * FROM #temp_errorlog WHERE ProcessInfo = 'Backup' ORDER BY Logdate
        DROP TABLE #temp_errorlog

    xp_readerrorlog is an undocumented system stored procedure, so there is no official Microsoft link describing the parameters it takes; however, there's a good blog on this here. And if you do have a problem with huge errorlogs, please consider running the system stored procedure sp_cycle_errorlog on a nightly or regular basis. But if you do this, remember to change the number of errorlogs you retain; the default of 6 might not be sufficient for you.

    Read the article

  • Podcast Show Notes: Architect Day Panel Highlights

    - by Bob Rhubart
    The 2010 series of Oracle Technology Network Architect Day events kicked off in May with events in Dallas, Texas, Redwood Shores, California, and Anaheim, California. The centerpiece of each Architect Day event is a panel discussion that brings together the day's various presenters along with experts drawn from the local Oracle community. This week’s ArchBeat program presents highlights from the panel discussion at the event held in Anaheim. Listen The voices you’ll hear in these highlights belong to (listed in order of appearance): Ralf Dossmann: Director of SOA and Middleware in Oracle’s Enterprise Solutions Group LinkedIn | Oracle Mix Floyd Teter: Innowave Technology, Oracle ACE Director Blog | Twitter | LinkedIn | Oracle Mix | Oracle ACE Profile Basheer Khan: Innowave Technology, Oracle ACE Director Blog | Twitter | LinkedIn | Oracle Mix | Oracle ACE Profile Jeff Savit: Oracle virtualization expert, former Sun Microsystems principal engineer Blog | LinkedIn | Oracle Mix Geri Born: Oracle security analyst LinkedIn A 10-minute podcast can't really do justice to the hour-long panel discussion at each Architect Day event, let alone the discussion that is characteristic of each session throughout each Architect Day. But at least you’ll get a taste of what you’ll find at the live events. You’ll find slide decks and more from this first series of 2010 events in the Architect Day Artifacts post on this blog. More dates/cities will be added soon to the Architect Day schedule. Coming Soon Next week’s ArchBeat program kicks off a three-part series featuring Cameron Purdy, Oracle ACE Director Aleksander Seovic, and Oracle ACE John Stouffer in a conversation about data grid technology and Oracle Coherence. Stay tuned: RSS Technorati Tags: oracle, oracle technology network, archbeat, arch2arch, podcast, architect day

    Read the article
