Search Results

Search found 34207 results on 1369 pages for 'query output'.


  • how to execute for loop with sed in terminal

    - by vipin8169
    I want to run a sed command inside a for loop and am getting an error. The loop is:

        for i in <comma-separated server name list>; do "command"; echo $i; done

    where command is:

        sed '/^$/d' /home/nextag/instance.properties | grep -vc '#'

    I'm getting the following error:

        -bash: sed "/^$/d" /home/nextag/instance.properties|grep -vc#: No such file or directory
        lu1

    What is the correct way to execute this command to get the expected output? I tried this as well:

        for i in lu1;do 'sed \'/^$/d\' /home/nextag/instance.properties|grep -vc \'#\'';echo $i;done

    Also, can someone explain the '/^$/d' part of sed '/^$/d' /home/nextag/instance.properties | grep -vc '#'?
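    A likely fix (a sketch, not a verified answer; the server names are placeholders): quoting the whole pipeline makes bash look for a single command literally named sed "/^$/d" ..., hence the "No such file or directory" error. Running the pipeline unquoted inside the loop avoids that:

        # Counts the non-blank, non-comment lines of the properties file once per server.
        for i in lu1 lu2; do
            sed '/^$/d' /home/nextag/instance.properties | grep -vc '#'
            echo "$i"
        done

    As for the last part of the question: in sed, the address /^$/ matches empty lines (^ is start of line, $ is end of line, with nothing allowed in between) and the d command deletes them, so the pipeline counts the remaining lines that do not contain a #.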


  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support).

    I've read a lot on the subject from Udi Dahan, who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me:

        As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above. -- Udi Dahan, Clarified CQRS

    From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) is supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely.

    Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes, as it often is with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out.

    Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?


  • How can I get my ATI / AMD drivers to work with any kernel above 3.2.0.x?

    - by TorakTu
    WHAT DID WORK

    Installed the original AMD64 version of the Ubuntu 12.04 ISO image. Burned a DVD and installed, which showed kernel 3.2.0-23 to begin with. Got 5.1 surround sound working. Got ATI (now AMD) video drivers installed for my Radeon HD R6870 video card from AMD's website. fglrxinfo came up and reported as normal.

    THE PROBLEM

    Kernel 3.2.0.x kept locking up, so I tried higher kernel versions. But the ATI / AMD drivers do not install on any kernel above 3.2.0.x.

    WHAT I HAVE TRIED

    I have gone over this tutorial many times ( https://help.ubuntu.com/community/BinaryDriverHowto/ATI ) and it doesn't work on ANY kernel except 3.2.0.x. The problem is that the ATI / AMD drivers work on 12.04 Precise with kernels 3.2.0-23 and 24, but the computer kept locking up. Although all my games would work, the lockups were random and constant. So I looked all over the web for 3 days trying to find an answer, and the advice for the lockup issue was simply to update the kernel. So I did, and tried many kernels. All of them ran without lockups, BUT the restricted AMD drivers from the AMD website will not install, and none of the open-source AMD drivers have EVER installed, no matter what kernel or version I tried.

    EXAMPLE OUTPUT OF 3D TYPE OF ERRORS

        javax.media.opengl.GLException: glXGetConfig failed: error code GLX_NO_EXTENSION
            at com.sun.opengl.impl.x11.X11GLDrawableFactory.glXGetConfig(X11GLDrawableFactory.java:651)
            at com.sun.opengl.impl.x11.X11GLDrawableFactory.xvi2GLCapabilities(X11GLDrawableFactory.java:350)
            at com.sun.opengl.impl.x11.X11GLDrawableFactory.chooseGraphicsConfiguration(X11GLDrawableFactory.java:174)
            at javax.media.opengl.GLCanvas.chooseGraphicsConfiguration(GLCanvas.java:520)
            at javax.media.opengl.GLCanvas.<init>(GLCanvas.java:131)
            at haven.HavenPanel.<init>(HavenPanel.java:68)
            at haven.HavenPanel.<init>(HavenPanel.java:78)
            at haven.MainFrame.<init>(MainFrame.java:182)
            at haven.MainFrame.main2(MainFrame.java:306)
            at haven.MainFrame.access$100(MainFrame.java:34)
            at haven.MainFrame$7.run(MainFrame.java:360)
            at java.lang.Thread.run(Thread.java:722)

    And of course this is what fglrxinfo shows:

        X Error of failed request:  BadRequest (invalid request code or no such operation)
          Major opcode of failed request:  139 (ATIFGLEXTENSION)
          Minor opcode of failed request:  66 ()
          Serial number of failed request:  13
          Current serial number in output stream:  13

    EDIT: I forgot to mention that I DID look at this post over the last few days and it did not help.
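    A hedged suggestion, not from the original post: the fglrx installer builds a kernel module, so it fails if the headers for the currently running kernel are missing. Before re-running the AMD installer on a newer kernel, it may be worth ensuring the build prerequisites are present:

        # Install build tools and the headers matching the running kernel.
        sudo apt-get install build-essential dkms linux-headers-$(uname -r)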


  • Understanding exceptional cases

    - by Justin
    I've been studying the use of exceptions in various PHP projects (such as Doctrine and Zend Framework). Exceptions seem to be thrown when out-of-the-ordinary input or state occurs. A perfect example is Doctrine throwing an exception when you try to use an invalid query string. I think the creators of the Doctrine API understood that, first, you can't query data using an invalid DQL statement, and a developer should immediately be warned that an error has occurred, rather than letting execution continue with the possibility of an error code going unchecked. I also bet that this simplifies reading the code. I can't think of a situation where you would want to use an invalid DQL statement, except in unit testing. Since this is true, it's better to avoid plaguing a bunch of code with null/error checks and use exceptions.

    I've read in books that exceptions shouldn't be thrown when validating user input, but I've seen examples where this guideline is broken. One example is the Zend Framework: if you supply an invalid controller or action name, an exception is thrown. Unlike Doctrine, the user has more direct control over this sort of input. I know you can configure an error controller and set up a 404 message or what have you, but I'm curious why they have used an exception in this scenario? I guess you can argue that the Zend Framework does not know how to continue processing the request.

    One last example: I wrote a function to return some HTML based on a given resource type. This resource type is hard-coded and sent when a user interacts with a web site (such as clicking a button to display the form to input data). I don't expect users to be mucking around with the request type. Under normal operating conditions, the resource type should be valid. To clean up some logic, I was going to throw an exception if a particular form wasn't found. This is mainly to find the correct form associated with a resource type so proper validation can occur. Does this sound like a valid use case for an exception? Right now it's pretty trivial, but I do plan to implement a RESTful consumer, and re-using a function to map resources to their validation services would be very useful. I can then catch the exception and, based on the consumer, return an error message suitable for the request type...


  • SSIS - Range lookups

    - by Repieter
    When developing an ETL solution in SSIS we sometimes need to do range lookups. Several solutions for this can be found on the internet, but now we have built another solution which I would like to share, since it's pretty easy to implement and the performance is fast.

    You can download the sample package to see how it works. Make sure you have the AdventureWorks2008R2 and AdventureWorksDW2008R2 databases installed.

    To give a little bit more information about the example, this is basically what it does: we load a fact table and do an SCD type 2 lookup operation on the Product dimension. This is done with a script component.

    First we query the data warehouse to create the lookup dataset. The query that is used for that is:

        SELECT
            [ProductKey]
            ,[ProductAlternateKey]
            ,[StartDate]
            ,ISNULL([EndDate], '9999-01-01') AS EndDate
        FROM [DimProduct]

    The output of this query is stored in a DataTable:

        string lookupQuery = @"
            SELECT
                [ProductKey]
                ,[ProductAlternateKey]
                ,[StartDate]
                ,ISNULL([EndDate], '9999-01-01') AS EndDate
            FROM [DimProduct]";

        OleDbCommand oleDbCommand = new OleDbCommand(lookupQuery, _oleDbConnection);
        OleDbDataAdapter adapter = new OleDbDataAdapter(oleDbCommand);

        _dataTable = new DataTable();
        adapter.Fill(_dataTable);

    Now that the dimension data is stored in the DataTable, we use the following method to do the actual lookup:

        public int RangeLookup(string businessKey, DateTime lookupDate)
        {
            // set default return value (Unknown)
            int result = -1;

            // filter the dimension rows on the business key of the current record
            DataRow[] filteredRows;
            filteredRows = _dataTable.Select(string.Format("ProductAlternateKey = '{0}'", businessKey));

            for (int i = 0; i < filteredRows.Length; i++)
            {
                // check if the lookup date is found between the start date and end date of any of the records
                if (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3])
                {
                    result = (filteredRows[i][0] == null) ? -1 : (int)filteredRows[i][0];
                    break;
                }
            }

            filteredRows = null;

            return result;
        }

    This method is executed for every row that passes the script component. This is implemented in the ProcessInputRow method:

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Perform the lookup operation on the current row and put the value in the Surrogate Key attribute
            Row.ProductKey = RangeLookup(Row.ProductNumber, Row.OrderDate);
        }

    Now what actually happens?

    1. Every record passes the business key and the order date to the RangeLookup method.
    2. The DataTable is then filtered on the business key of the current record. The output is stored in a DataRow[] object.
    3. We loop over the DataRow[] object to see where the order date meets the following expression: (lookupDate >= (DateTime)filteredRows[i][2] && lookupDate < (DateTime)filteredRows[i][3])
    4. When the expression returns true (so where the date is between the StartDate and the EndDate), the surrogate key of the dimension record is returned.

    We have done some testing with this solution and it works great for us. Hope others can use this example to do their range lookups.


  • Celko's SQL Stumper: Eggs in one Basket

    Joe Celko returns with another stumper to celebrate Easter. Unsurprisingly, it involves eggs. More surprising is the nature of the puzzle: this time, it is one of designing a database rather than writing a query - DDL as well as DML.


  • Delay command execution over sockets

    - by David
    I've been trying to fix the game loop in a real-time (tick delay) MUD. I realized using Thread.Sleep would seem clunky when the user spammed commands through their choice of client (Zmud, etc.), e.g. east;south;southwest would wait three move ticks and then output everything from the past couple of rooms. The game loop basically calls a Flush and Fill method for each socket during each tick (50ms):

        private void DoLoop()
        {
            Stopwatch stopWatch = new Stopwatch();
            stopWatch.Start();
            while (running)
            {
                // for each socket, flush and fill
                ConnectionMonitor.Update();
                stopWatch.Stop();
                WaitIfNeeded(stopWatch.ElapsedMilliseconds);
                stopWatch.Reset();
            }
        }

    The Fill method fires the command events, but as mentioned before, they currently block using Thread.Sleep. I tried adding a "ready" flag to the state object that attempts to execute the command, along with a queue of spammed commands, but it ends up executing one command and queuing up the rest, i.e. each subsequent command executes something that got queued up that should've been executed before. I must be missing something about the timer.

        private readonly Queue<SpammedCommand> queuedCommands = new Queue<SpammedCommand>();
        private bool ready = true;

        private void TryExecuteCommand(string input)
        {
            var commandContext = CommandContext.Create(input);
            var player = Server.Current.Database.Get<Player>(Session.Player.Key);
            var commandInfo = Server.Current.CommandLookup
                .FindCommand(commandContext.CommandName, player.IsAdmin);

            if (commandInfo != null)
            {
                if (!ready)
                {
                    // queue command
                    queuedCommands.Enqueue(new SpammedCommand()
                    {
                        Context = commandContext,
                        Info = commandInfo
                    });
                    return;
                }

                if (queuedCommands.Count > 0)
                {
                    // queue the incoming command
                    queuedCommands.Enqueue(new SpammedCommand()
                    {
                        Context = commandContext,
                        Info = commandInfo,
                    });

                    // dequeue and execute
                    var command = queuedCommands.Dequeue();
                    command.Info.Command.Execute(Session, command.Context);
                    setTimeout(command.Info.TickLength);
                    return;
                }

                commandInfo.Command.Execute(Session, commandContext);
                setTimeout(commandInfo.TickLength);
            }
            else
            {
                Session.WriteLine("Command not recognized");
            }
        }

    Finally, setTimeout was supposed to set the execution delay (TickLength) for that command, and makeReady just sets the ready flag on the state object to true:

        private void setTimeout(TickDelay tickDelay)
        {
            ready = false;
            var t = new System.Timers.Timer()
            {
                Interval = (long) tickDelay,
                AutoReset = false,
            };
            t.Elapsed += makeReady;
            t.Start(); // fire this in tickDelay ms
        }

        // MAKE READYYYYY!!!!
        private void makeReady(object sender, System.Timers.ElapsedEventArgs e)
        {
            ready = true;
        }

    Am I missing something about the System.Timers.Timer created in setTimeout? How can I execute (and output) spammed commands per TickLength without using Thread.Sleep?


  • Using the FormView Web Control in ASP.NET 3.5

    A FormView web control works much like a DetailsView web control: it displays one record at a time to the browser from the database. The difference is that the FormView is a template-based layout, for which a developer can make detailed changes that affect the final output when rendered in the browser. This tutorial will explain how it works and walk you through setting up a FormView web control...


  • Can't Run Assault Cube

    - by Debashis Pradhan
    I installed AssaultCube from the Software Centre and it just opens for half a second and closes. When I run it from the terminal, this is what I get:

        d@d-platform:~$ assaultcube
        Using home directory: /home/d/.assaultcube_v1.104
        current locale: en_IN
        init: sdl
        init: net
        init: world
        init: video: sdl
        init: video: mode
        X Error of failed request:  BadValue (integer parameter out of range for operation)
          Major opcode of failed request:  129 (XFree86-VidModeExtension)
          Minor opcode of failed request:  10 (XF86VidModeSwitchToMode)
          Value in failed request:  0xb3
          Serial number of failed request:  131
          Current serial number in output stream:  133


  • Running a program on boot without login, using the screen

    - by configurator
    Preface: I have a server running on an old laptop. The screen always shows a login prompt, but because the keyboard is in pretty bad shape, I use the machine exclusively via ssh. The screen is in a good position, though; I want to use it to display a clock and some stats about what my server is doing. I have scripts to display all those things, but I want to always show them on the monitor.

    My question is: how do I get my script (called HUD) to run on /dev/tty1 instead of the login prompt? Hopefully it should be possible to accept keyboard input as well as display output, so that a future version can use the keyboard to show more info where needed. I'd also like tty2 etc. to remain active as login screens, in case I actually do need to log in locally.

    For a start, I tried creating a script that I can run from ssh to start the HUD. It goes something like this:

        ( flock -n 9 watch --interval 0.2 --precise --color --notitle --exec /path/to/script & disown ) 9> /var/lock/hud > /dev/tty1 2> /dev/tty1 < /dev/tty1

    (I had to use & disown instead of nohup because nohup recognized the tty and redirected output to nohup.out instead.)

    This sort-of works. However, it has a few issues:

    1. It doesn't steal the terminal's keyboard input, so you can't press ctrl+c to get out of it (nor change the script to actually use the keyboard input), and if you press enter it shows it and scrolls the display, never refreshing it correctly afterwards.
    2. Oddly, if I disconnect the ssh session which created it, it stops working and shows a message: exec: No such file or directory. If I reconnect over ssh, it resumes functioning properly.
    3. It feels hackish.

    Is there a better way to do this? How?
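    One possible alternative (a sketch under assumptions, not from the original question; the script path is hypothetical and the getty job name assumes Upstart, as used on older Ubuntu): stop the getty on tty1 and let openvt run the script on that console, which attaches the console's keyboard properly instead of redirecting a background job's streams:

        # Stop the login prompt on tty1 (Upstart job name assumed),
        # then start the HUD script attached to virtual terminal 1.
        sudo stop tty1
        sudo openvt -c 1 -f -- /path/to/hud-script

    ttys 2 and up keep their own getty jobs, so local logins there still work.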


  • How to Send the Contents of the Clipboard to a Text File via the Send to Menu

    - by Jason Faulkner
    We have previously covered how to send the contents of a text file to the Windows Clipboard with a simple Send To shortcut, but what if you want to do the opposite? That is: send the contents of the clipboard to a text file with a simple shortcut. No problem. Here’s how.

    Copy the ClipOut Utility

    While Windows offers the command line tool ‘clip’ as a way to direct console output to the clipboard, it does not have a tool to direct the clipboard contents to the console. To do this, we are going to use a small utility named ClipOut (download link at the bottom). Simply download and extract this file to a location in your Windows PATH variable (if you don’t know what this means, just extract the EXE to your C:\Windows folder) and you are ready to go.

    Add the Send To Shortcut

    Open your Send To folder location by going to Run > shell:sendto

    Create a new shortcut with the command:

        CMD /C ClipOut >

    Note the above command will overwrite the contents of the selected file. If you would like to append to the contents of the selected file, use this command instead:

        CMD /C ClipOut >>

    Of course, you could make shortcuts for both. Give a descriptive name to the shortcut, and you’re finished. Using this shortcut will now send the text contents copied to your Windows Clipboard to the selected file. It is important to note that the ClipOut tool only supports outputting text. If you have binary data copied to your clipboard, the output will be empty.

    Changing the Icon

    By default, the icon for the shortcut will appear as a command prompt, but you can easily change this by editing the properties of the shortcut and clicking the Change Icon button. We used an icon located in “%SystemRoot%\System32\shell32.dll”, but any icon of your liking will do. As an additional tweak, you can set the properties of the shortcut to run minimized. This will prevent the command window from “blinking” when the Send To command is run (instead it will blink in your taskbar, which is hardly noticeable).

    Links

    Download ClipOut Utility


  • OpenGL and switchable graphic cards

    - by Orcun
    I use a laptop, and this laptop has an AMD Radeon HD 6470M plus an onboard graphics card. When I run fglrxinfo, I get this error:

        X Error of failed request:  BadRequest (invalid request code or no such operation)
          Major opcode of failed request:  136 (GLX)
          Minor opcode of failed request:  19 (X_GLXQueryServerString)
          Serial number of failed request:  12
          Current serial number in output stream:  12

    Is it a problem? For this reason I can't use OpenGL: I can't run any OpenGL applications.


  • How to enable desktop effects on Ubuntu 10.04 after upgrade from Ubuntu 8.04?

    - by Manohar Bhattarai
    I upgraded my Ubuntu 8.04 to Ubuntu 10.04. When I try to enable desktop effects it says "Desktop effects could not be enabled". The output of "lspci | grep VGA" is:

        00:02.0 VGA compatible controller: Intel Corporation 82845G/GL[Brookdale-G]/GE Chipset Integrated Graphics Device (rev 03)

    The Hardware Drivers tool says there is no proprietary hardware driver. I installed the nVidia driver, but I think mine is an Intel graphics device. Please help.
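    A hedged first step, not from the original question: since lspci shows an Intel 82845G chip, the nVidia driver is the wrong one and can conflict with the open-source intel driver. Removing it and reinstalling the Intel X driver may help:

        # Remove the mistakenly installed nVidia packages,
        # then make sure the open-source Intel driver is in place.
        sudo apt-get purge 'nvidia*'
        sudo apt-get install --reinstall xserver-xorg-video-intel

    Whether compiz effects run at all on a chip this old is a separate question; the 845G has very limited 3D support.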


  • Where to store users consent (EU cookie law)

    - by Mantorok
    We are legally obliged in a few months to obtain consent from users before we store any cookies on their PCs. My question is: what would be the most effective way of storing this consent to ensure that users don't get repeat requests to give consent in the future? Obviously for authenticated users I can store this against their profile, but what about non-authenticated users? My initial thought, ironically, was to store the given consent in a cookie..?


  • What does this rule mean

    - by Kenyana
    When I run sudo iptables -L, this is what I get:

        Chain INPUT (policy ACCEPT)
        target     prot opt source      destination
        REJECT     tcp  --  anywhere    anywhere     tcp dpt:www flags:FIN,SYN,RST,ACK/SYN #conn/32 > 20 reject-with tcp-reset

        Chain FORWARD (policy ACCEPT)
        target     prot opt source      destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source      destination

    What does this mean? I am pretty new to the whole Ubuntu world. I cannot access Webmin at times; I keep getting a "The connection has timed out" error.
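    For context (a hedged reconstruction, not from the original post): a listing like that REJECT line is what the connlimit match produces, so the rule was probably added with something close to:

        # Reject new connections to port 80 (www) from any source /32 that
        # already has more than 20 connections open, replying with a TCP reset.
        sudo iptables -A INPUT -p tcp --syn --dport www \
            -m connlimit --connlimit-above 20 --connlimit-mask 32 \
            -j REJECT --reject-with tcp-reset

    It only affects port 80, so by itself it should not block Webmin's default port 10000.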


  • Chrome install failed on ubuntu 12.04

    - by Nathan
    I have tried sudo apt-get install -f and then sudo apt-get update, but I still have the same dependency problems:

        dpkg: dependency problems prevent configuration of google-chrome-stable:i386:
         google-chrome-stable:i386 depends on xdg-utils

    Any idea how to fix it? BTW, when I use sudo apt-get install -f, I get this output:

        After this operation, 119 MB disk space will be freed.
        Do you want to continue [Y/n]? y

    which seems to remove the files.
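    A hedged suggestion, not a confirmed fix: apt-get install -f proposing to free disk space usually means it wants to remove the half-configured Chrome package. Installing the missing dependency first and then letting dpkg finish the configuration avoids that:

        sudo apt-get install xdg-utils
        sudo dpkg --configure -a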


  • Using irc in NetBeans IDE 7.2

    - by Geertjan
    Turns out to be easy to use irc in NetBeans IDE 7.2. Install Irssi (I was able to do apt-get to install it), which has a handy guide here, and then use the Terminal window in NetBeans IDE (Window | Output | Terminal). In the Terminal, do this:

        irssi
        /connect irc.freenode.net
        /join #netbeans

    Then, next time you have a problem in NetBeans IDE or there's some question you have about how to do something, just type your question in the Terminal window and someone will help you, if someone is there who knows the answer.


  • Partial Shader Signatures HLSL D3D11 C++

    - by ThePhD
    I had been debugging a problem I was having in a single shader file with 2 functions in it. I'm using DirectX 11, vs_5_0 and ps_5_0. I have stripped it down to its basic components to understand what was going wrong with the shaders, because the differently named components of the Pixel and Vertex shaders were swapping the data being input:

        void QuadVertex
        (
            inout float4 position : SV_Position,
            inout float4 color : COLOR0,
            inout float2 tex : TEXCOORD0
        )
        {
            // ViewProjection is a 4x4 matrix,
            // just included here to show the simple passthrough of the data
            position = mul(position, ViewProjection);
        }

    And a Pixel Shader:

        float4 QuadPixel
        (
            float4 color : COLOR0,
            float2 tex : TEXCOORD0
        ) : SV_Target0
        {
            // Color is filled with position data and tex is
            // filled with color values from the Vertex Shader
            return color;
        }

    The ID3D11InputLayout and associated C++ code correctly compiles the shaders and sets them up with some simple primitive data:

        data[0].Position.x = 0.0f * 210;
        data[0].Position.y = 1.0f * 160;
        data[0].Position.z = 0.0f;
        data[1].Position.x = 0.0f * 210;
        data[1].Position.y = 0.0f * 160;
        data[1].Position.z = 0.0f;
        data[2].Position.x = 1.0f * 210;
        data[2].Position.y = 1.0f * 160;
        data[2].Position.z = 0.0f;
        data[0].Colour = Colors::Red;
        data[1].Colour = Colors::Red;
        data[2].Colour = Colors::Red;
        data[0].Texture = Vector2::Zero;
        data[1].Texture = Vector2::Zero;
        data[2].Texture = Vector2::Zero;

    When used with the shader, the float4 color always ended up with the position data, and the float2 tex always ended up with the color data. After a moment, I figured out that the shader's input and output signatures needed to be in the correct order and the correct format, laid out in the exact order of the output from the Vertex Shader, regardless of the semantics:

        float4 QuadPixel
        (
            float4 pos : SV_Position,
            float4 color : COLOR0,
            float2 tex : TEXCOORD0
        ) : SV_Target0
        {
            return color;
        }

    After finding this out, my question is: why don't the semantics map the appropriate components when going from Vertex Shader to Pixel Shader? Is there any way I can make it so certain semantics are always mapped to other semantics, or do I always have to follow the rigid shader signature (in this case Position, Color, and Texture)?

    As a side note on why I'm asking: I know that when using XNA, my shader signatures for functions could differ in position and even drop items from Vertex Shader to Pixel Shader function parameters, having only the COLOR0 and TEXCOORD0 components being used (and it would still match up correctly). However, I also know that XNA relied on a DX9 (and maybe a little DX10) implementation, and maybe this kind of flexibility no longer exists in DX11?


  • Connect Digest : 2012-07-06

    - by AaronBertrand
    I've filed a few Connect items recently that I think are important.

    In #752210, I complain that the documentation for DDL triggers suggests that they can prevent certain DDL from being run, which is not the case at all. http://connect.microsoft.com/SQLServer/feedback/details/752210/doc-ddl-trigger-topic-suggests-that-rollbacks-run-before-action

    In #745796, I complain that scripting datetime data in Management Studio yields output that contains a binary representation instead of a human-readable...


  • Ubuntu doesn't "see" external USB Hard Disk

    - by Mina Michael
    It's NTFS. It's USB2. I'm using Ubuntu 13.04. It works perfectly fine on Windows (which excludes cable and hardware problems). I have two Ubuntu computers and it's not detected on either. It's about 500 GB.

    Edits: Following the first link, I typed sudo lsusb in a terminal, before and after connecting the HDD. The difference was:

        Bus 001 Device 012: ID 14cd:6116 Super Top M6116 SATA Bridge

    There it is! ("sata bridge" used to appear in a Windows notification when I plugged the HDD in!) ...This means that Ubuntu detects it, but is it not mounting it? I tried this:

        sudo mount /dev/sdb1 /mnt

    But it gives this:

        mount: special device /dev/sdb1 does not exist

    I also tried:

        sudo mount /dev/sdc1 /mnt

    but it stays with no output forever. I left it in the background for about 30 minutes. sudo fdisk -l gives out this:

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xa42d04a3

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1              63       80324       40131   de  Dell Utility
        /dev/sda2   *       80325   102481919    51200797+   7  HPFS/NTFS/exFAT
        /dev/sda3       263874558   312580095    24352769    5  Extended
        /dev/sda4       102481920   263872511    80695296    7  HPFS/NTFS/exFAT
        /dev/sda5       263874560   310505471    23315456   83  Linux
        /dev/sda6       310507520   312580095     1036288   82  Linux swap / Solaris

        Partition table entries are not in disk order

        Disk /dev/sdc: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x5822aaea

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1            2048   976769023   488383488    7  HPFS/NTFS/exFAT

    The part below "Partition table entries are not in disk order" takes about 5 minutes to appear.

    The outputs of ls /dev/ | grep sd before and after connecting the HDD:

        before: sda sda1 sda2 sda3 sda4 sda5 sda6
        after:  sda sda1 sda2 sda3 sda4 sda5 sda6 sdd sdd1

    The second output has the lines sdd and sdd1 that the first one lacks. IT SHOWED THE FILES!! The command sudo mount /dev/sdd1 /mnt worked after I typed in sudo fdisk -l!!! Thanks a million!! :) :)


  • Prevent Truncation of Dynamically Generated Results in SQL Server Management Studio

    While working with the Results to Text option in SSMS, you may come across a situation where the output from dynamically generated data is truncated. In this article I will guide you on how to fix this issue and print all the text for the Results to Text option.


  • Graphical disk surface check tool?

    - by sbergeron
    I need a program that can scan my hard drive for read and write errors so I can partition around them. I REALLY don't do well with numbers, but if I could have something that shows an output like the graphical display in gparted, that would be perfect. I know a lot of people would recommend replacing the disk, but right now I can't, as I NEED this laptop for school and can't wait for a hard drive to arrive (I have ordered one, yes, but I don't expect it to arrive for another couple of weeks, as I only found out afterwards that they still have to manufacture it).
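    Not graphical, but a common approach for the scanning step itself (a sketch, not a confirmed answer; replace /dev/sda with the actual disk, and note that a read-write test destroys data):

        # Read-only surface scan: -s shows progress, -v reports each bad block found.
        sudo badblocks -sv /dev/sda | tee bad-blocks.txt

    The resulting list can be handed to mkfs.ext4 -l bad-blocks.txt when creating a filesystem, so the known-bad sectors are skipped without having to read the numbers yourself.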


  • Serious problem with my sound system, No Hardware detected Suddenly, Please Help

    - by Aravind
    I'm quite new to Ubuntu but recently started using it. Two days back I had a problem with muting the laptop speaker when the headphone jack is plugged in; to resolve this I searched and somehow got it working with the ALSA mixer. FYI, this is the output of the ALSA script: http://www.alsa-project.org/db/?f=9ec8099800aca2cb74ee35c2bf58125e45ca9f43

    But since today morning there is no sound and no sound hardware is detected! It looks like below:

        http://i.imgur.com/gnK8R.png
        http://i.imgur.com/nxgvU.png

    Please help. - regards, Aravind
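    One low-risk thing to try (a hedged suggestion, not from the original post): Ubuntu ships an init helper that unloads and reloads all the sound modules, which sometimes brings back hardware that disappeared after mixer changes:

        sudo alsa force-reload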


  • High CPU load for 1:30 minutes when mounting ext4-raid partition

    - by sirion
    I have a RAID 5 (software) with 5x2TB drives. I encrypted the RAID with cryptsetup and put an ext4 partition on top. In the beginning, opening and mounting the RAID took less than 10 seconds; now (for a few weeks) mounting alone takes 1:30 minutes and the CPU stays around 93% the whole time. The output of time sudo mount /dev/mapper/8000 /media/8000 is:

        real    1m31.952s
        user    0m0.008s
        sys     1m25.229s

    At the same time, only one line is added to /var/log/syslog:

        kernel: [ 2240.921381] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)

    My Ubuntu version is "12.04.1 LTS" and no updates are pending. I checked the partition with fsck, but it says that all is ok. The "cryptsetup luksOpen" command only takes a few seconds. I also tried changing the raid bitmap (as was suggested in some forum), but it did not change the behaviour:

        sudo mdadm --grow /dev/md0 -b internal
        sudo mdadm --grow /dev/md0 -b none

    I had the idea that it might be the hardware being slow, but a read test with sudo hdparm -t /dev/md0 spit out values between 62 and 159 MB/sec:

        Timing buffered disk reads: 382 MB in 3.00 seconds = 127.14 MB/sec
        Timing buffered disk reads: 482 MB in 3.02 seconds = 159.62 MB/sec
        Timing buffered disk reads: 190 MB in 3.03 seconds =  62.65 MB/sec
        Timing buffered disk reads: 474 MB in 3.02 seconds = 157.12 MB/sec

    Although I think it is strange that the read rate jumps by more than 100% - could that mean something? The speed test when reading from the mapped (decrypted) device shows similar behaviour, although it is of course much slower. sudo hdparm -t /dev/mapper/8000:

        Timing buffered disk reads:  56 MB in 3.02 seconds = 18.54 MB/sec
        Timing buffered disk reads: 122 MB in 3.09 seconds = 39.43 MB/sec
        Timing buffered disk reads: 134 MB in 3.02 seconds = 44.35 MB/sec

    The output of a verbose mount, mount -vvv /dev/mapper/8000 /media/8000, does not help much:

        mount: fstab path: "/etc/fstab"
        mount: mtab path:  "/etc/mtab"
        mount: lock path:  "/etc/mtab~"
        mount: temp path:  "/etc/mtab.tmp"
        mount: UID:        0
        mount: eUID:       0
        mount: spec:  "/dev/mapper/8000"
        mount: node:  "/media/8000"
        mount: types: "(null)"
        mount: opts:  "(null)"
        mount: you didn't specify a filesystem type for /dev/mapper/8000
               I will try type ext4
        mount: mount(2) syscall: source: "/dev/mapper/8000", target: "/media/8000", filesystemtype: "ext4", mountflags: -1058209792, data: (null)

    Any idea where I could find additional information on why mounting takes so long, or what additional tests I could run?
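    One diagnostic worth trying (a suggestion, not from the original post): since almost all of the elapsed time is system time, a syscall summary of the mount itself would show where the kernel is spending it:

        # -c prints a per-syscall time summary, -f follows child processes.
        sudo strace -c -f mount /dev/mapper/8000 /media/8000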


  • Configuring trace file size and number in WebCenter Content 11g

    - by Kyle Hatlestad
    Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g. This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood. You can access the settings as well as a view of the tracing by going to Administration -> System Audit Information. From here, you can select the tracing sections to include. Some of my personal favorites are searchquery, systemdatabase, userstorage, and indexer. Usually I'm trying to find out some information regarding a search, database query, or user information. Besides debugging, it's also very helpful for performance tuning.

    One of the nice tricks with the tracing is that it honors the wildcard (*) character. So you can put in 'schema*' and gather all of the schema-related tracing. And you can notice that if you select 'all' and update, it changes to just a *.

    To view the tracing in real time, you simply go to the 'View Server Output' page and the latest tracing information will be at the bottom. This works well if you're looking at something pretty discrete and the system isn't getting much activity. But if you've got a lot of tracing going on, it would be better to go after the trace log file itself. By default, the log files can be found in the <content server instance directory>/data/trace directory. You'll see them named idccs_<managed server name>_current.log. You may also find previous trace logs that have rolled over; in that case they will be identified by a date/time stamp in the name. By default, the server will rotate the logs after they reach 1MB in size, and it will keep the most recent 10 logs before they roll off and get deleted. If your server is in a cluster, then the trace file should be configured to be local to the node per the recommended configuration settings.

    If you're doing some extensive tracing and need to capture all of the information, there are a couple of configuration flags you can set to control the logs:

        #Change log size to 10MB and number of logs to 20
        FileSizeLimit=10485760
        FileCountLimit=20

    This is set by going to Admin Server -> General Configuration and entering them in the Additional Configuration Variables section. Restart the server and it should take on the new logging settings.

