Search Results

Search found 29101 results on 1165 pages for 'thread local storage'.

Page 111/1165 | < Previous Page | 107 108 109 110 111 112 113 114 115 116 117 118  | Next Page >

  • C# Process Binary File, Multi-Thread Processing

    - by washtik
    I have the following code that processes a binary file. I want to split the processing workload by using threads, assigning each line of the binary file to a thread in the ThreadPool. Processing time for each line is only small, but when dealing with files that might contain hundreds of lines it makes sense to split the workload. My question is regarding the BinaryReader and thread safety. First of all, is what I am doing below acceptable? I have a feeling it would be better to pass only the binary data for each line to the PROCESS_Binary_Return_lineData method. Please note the code below is conceptual. I'm looking for a bit of guidance on this, as my knowledge of multi-threading is in its infancy. Perhaps there is a better way to achieve the same result, i.e. splitting the processing of each binary line.

    var dic = new Dictionary<DateTime, Data>();
    var resetEvent = new ManualResetEvent(false);
    using (var b = new BinaryReader(File.Open(Constants.dataFile, FileMode.Open, FileAccess.Read, FileShare.Read)))
    {
        var lByte = b.BaseStream.Length;
        var toProcess = 0;
        while (lByte >= DATALENGTH)
        {
            b.BaseStream.Position = lByte;
            lByte = lByte - AB_DATALENGTH;
            ThreadPool.QueueUserWorkItem(delegate
            {
                Interlocked.Increment(ref toProcess);
                var lineData = PROCESS_Binary_Return_lineData(b);
                lock (dic)
                {
                    if (!dic.ContainsKey(lineData.DateTime))
                    {
                        dic.Add(lineData.DateTime, lineData);
                    }
                }
                if (Interlocked.Decrement(ref toProcess) == 0)
                    resetEvent.Set();
            }, null);
        }
    }
    resetEvent.WaitOne();
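    A minimal sketch (not the poster's code) of the variation hinted at above: each record's bytes are read on the reader thread, and only that byte[] is handed to the pool, so the BinaryReader is never touched from more than one thread. It assumes .NET 4's CountdownEvent; DATALENGTH, Data, Constants and ParseLine are placeholders for the poster's own types and methods.

    var dic = new Dictionary<DateTime, Data>();
    using (var countdown = new CountdownEvent(1))
    using (var b = new BinaryReader(File.Open(Constants.dataFile, FileMode.Open, FileAccess.Read, FileShare.Read)))
    {
        var pos = b.BaseStream.Length - DATALENGTH;
        while (pos >= 0)
        {
            b.BaseStream.Position = pos;
            byte[] record = b.ReadBytes(DATALENGTH);   // read on this thread only
            pos -= DATALENGTH;

            countdown.AddCount();
            ThreadPool.QueueUserWorkItem(delegate
            {
                var lineData = ParseLine(record);      // pure CPU work, no shared reader
                lock (dic)
                {
                    if (!dic.ContainsKey(lineData.DateTime))
                        dic.Add(lineData.DateTime, lineData);
                }
                countdown.Signal();
            });
        }
        countdown.Signal();   // release the initial count
        countdown.Wait();     // block until every queued record has been parsed
    }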

    Read the article

  • Thread-safe data structure design

    - by Inso Reiges
    Hello, I have to design a data structure that is to be used in a multi-threaded environment. The basic API is simple: insert element, remove element, retrieve element, check that an element exists. The structure's implementation uses implicit locking to guarantee the atomicity of a single API call. After I implemented this, it became apparent that what I really need is atomicity across several API calls. For example, if a caller needs to check for the existence of an element before trying to insert it, he can't do that atomically even if each single API call is atomic: if(!data_structure.exists(element)) { data_structure.insert(element); } The example is somewhat awkward, but the basic point is that we can't trust the result of the "exists" call any more once we return from the atomic context (the generated assembly clearly shows a minor chance of a context switch between the two calls). What I currently have in mind to solve this is exposing the lock through the data structure's public API. This way clients will have to lock things explicitly, but at least they won't have to create their own locks. Is there a better commonly known solution to these kinds of problems? And as long as we're at it, can you recommend some good literature on thread-safe design? EDIT: I have a better example. Suppose that element retrieval returns either a reference or a pointer to the stored element and not its copy. How can a caller be protected so that it can safely use this pointer/reference after the call returns? If you think that not returning copies is a problem, then think about deep copies, i.e. objects that must also copy the other objects they point to internally. Thank you.
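    One commonly suggested direction, sketched below in C# (the question itself is language-neutral, and every name here is illustrative): instead of exposing the raw lock, give the structure compound operations that are atomic as a unit, plus an explicit hook that runs caller-supplied work while the internal lock is held. This mirrors what java.util.concurrent does with putIfAbsent: the compound operation is the unit of atomicity, so callers never act on a stale exists() result.

    // requires System and System.Collections.Generic
    public class ConcurrentStore<TKey, TValue>
    {
        private readonly object _sync = new object();
        private readonly Dictionary<TKey, TValue> _items = new Dictionary<TKey, TValue>();

        // check-then-insert as one atomic call
        public bool InsertIfAbsent(TKey key, TValue value)
        {
            lock (_sync)
            {
                if (_items.ContainsKey(key)) return false;
                _items.Add(key, value);
                return true;
            }
        }

        // escape hatch: the caller's work runs while the internal lock is held
        public void WithLock(Action<IDictionary<TKey, TValue>> action)
        {
            lock (_sync) { action(_items); }
        }
    }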

    Read the article

  • Two column layout, navigation div on the right, solution from previous thread didn't seem to work

    - by Tom
    I tried the solution from this thread, but I must be missing something because it doesn't work: <div style="float:left;margin-right:200px"> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p> </div> <div style="float:right;width:200px"> <p>navigation</p> </div> It works when the text in the content div (the left one) is short, but when it's long the div takes up the whole width of the browser and the margin is there, yet the right div is pushed below the first one nevertheless. What am I missing? Edit: The goal is to have a fixed-size navigation column on the right of the browser window, and the left div should get all the space left over by that navigation column (liquid layout).

    Read the article

  • Reload images in a UIWebView after they have been downloaded by a background thread

    - by dantastic
    I have an application that frequently checks in with a server and downloads a batch of articles to the iPhone. The articles are in HTML and just stored using Core Data. An article has 0-n images on the page. Downloading all associated images at the same time as the text would be too slow and take too much bandwidth. Users are not likely to open every article, but if they open an article once it is likely they will open it several times. So I want to download and store the images locally when they are needed. These articles are listed in a UITableView. When you tap an article you pop open a UIWebView that displays the article. I have a function that checks if I have already downloaded the images associated with the article. If I have, I just pop open the UIWebView - everything works fine. If I don't have the images downloaded, I go off and download them and store them to my Documents directory. Although this is working, the app hangs while the images are downloading. Not very tidy. I want the article to open in a snap and download the images with the article open. So what I've done is check if the images are downloaded; if they aren't, I go ahead and just "touch" the files I need and load the web view. The UIWebView opens up, but the referenced images contain no data. Then in a background thread I download the images and overwrite the "dummy" ones. This saves the images and everything, but it won't reload the images in my current UIWebView. I have to go back out of the article and back in again to see the images. Are there any ways around this - reloading just an image in a UIWebView?

    Read the article

  • How do I get events to execute on the "main thread"

    - by William DiStefano
    I have 2 classes, one is frmMain, a Windows form, and the other is a class, in VB.NET 2003. frmMain contains a start button which executes the monitor function in the other class. My question is: I am manually adding the event handlers -- when the events are raised, how do I get them to execute on the "main thread"? Because when the tooltip balloon pops up at the tray icon it displays a second tray icon instead of popping up at the existing tray icon. I can confirm that this is because the events are firing on new threads, because if I display a balloon tooltip from frmMain it shows on the existing tray icon. Here is part of the second class (not the entire thing):

    Friend Class monitorFolders
        Private _watchFolder As System.IO.FileSystemWatcher = New System.IO.FileSystemWatcher
        Private _eType As evtType

        Private Enum evtType
            changed
            created
            deleted
            renamed
        End Enum

        Friend Sub monitor(ByVal path As String)
            _watchFolder.Path = path
            'Add a list of Filter to specify
            _watchFolder.NotifyFilter = IO.NotifyFilters.DirectoryName
            _watchFolder.NotifyFilter = _watchFolder.NotifyFilter Or IO.NotifyFilters.FileName
            _watchFolder.NotifyFilter = _watchFolder.NotifyFilter Or IO.NotifyFilters.Attributes
            'Add event handlers for each type of event that can occur
            AddHandler _watchFolder.Changed, AddressOf change
            AddHandler _watchFolder.Created, AddressOf change
            AddHandler _watchFolder.Deleted, AddressOf change
            AddHandler _watchFolder.Renamed, AddressOf Rename
            'Start watching for events
            _watchFolder.EnableRaisingEvents = True
        End Sub

        Private Sub change(ByVal source As Object, ByVal e As System.IO.FileSystemEventArgs)
            If e.ChangeType = IO.WatcherChangeTypes.Changed Then
                _eType = evtType.changed
                notification()
            End If
            If e.ChangeType = IO.WatcherChangeTypes.Created Then
                _eType = evtType.created
                notification()
            End If
            If e.ChangeType = IO.WatcherChangeTypes.Deleted Then
                _eType = evtType.deleted
                notification()
            End If
        End Sub

        Private Sub notification()
            _mainForm.NotifyIcon1.ShowBalloonTip(500, "Folder Monitor", "A file has been " + [Enum].GetName(GetType(evtType), _eType), ToolTipIcon.Info)
        End Sub
    End Class
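    FileSystemWatcher raises its events on thread-pool threads unless told otherwise, which is why the balloon ends up on the wrong thread. A minimal sketch of the usual fix, shown here in C# (the VB.NET 2003 translation is mechanical): either point the watcher's SynchronizingObject at the form so events are raised on the UI thread, or marshal the notification yourself with Invoke. _mainForm, NotifyIcon1, evtType and _eType are the poster's own names.

    // Option 1 (goes in the monitor() setup code): let the watcher marshal for you
    _watchFolder.SynchronizingObject = _mainForm;

    // Option 2: marshal explicitly inside the handler
    private void notification()
    {
        if (_mainForm.InvokeRequired)
        {
            // hop onto the thread that owns the form and its NotifyIcon
            _mainForm.BeginInvoke(new MethodInvoker(notification));
            return;
        }
        _mainForm.NotifyIcon1.ShowBalloonTip(500, "Folder Monitor",
            "A file has been " + Enum.GetName(typeof(evtType), _eType),
            ToolTipIcon.Info);
    }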

    Read the article

  • Calling into a saved java object via JNI from a different thread

    - by Drake Amara
    I have a Java object which calls into a C++ shared object via JNI. In C++, I am saving a reference to the JNIEnv and jobject.

    JavaVM * jvm;
    JNIEnv * myEnv;
    jobject myobj;

    JNIEXPORT void JNICALL Java_org_api_init (JNIEnv *env, jobject jObj)
    {
        myEnv = env;
        myobj = jObj;
    }

    I also have a GLSurfaceView renderer and it eventually calls the C++ shared object mentioned above on a different thread, the GLThread. I am then trying to call back into my original Java object using the jobject I saved initially, but I think because I am on the GLThread, I get the following error.

    W/dalvikvm(16101): JNI WARNING: 0x41ded218 is not a valid JNI reference
    I/dalvikvm(16101): "GLThread 981" prio=5 tid=15 RUNNABLE
    I/dalvikvm(16101): | group="main" sCount=0 dsCount=0 obj=0x41d6e220 self=0x5cb11078
    I/dalvikvm(16101): | sysTid=16133 nice=0 sched=0/0 cgrp=apps handle=1555429136
    I/dalvikvm(16101): | schedstat=( 0 0 0 ) utm=42 stm=32 core=1

    The code calling back into Java:

    void setData()
    {
        jvm->AttachCurrentThread(&myEnv, 0);
        jclass javaClass = myEnv->FindClass("com/myapp/myClass");
        if (javaClass == NULL) {
            LOGD("ERROR - cant find class");
        }
        jmethodID method = myEnv->GetMethodID(javaClass, "updateDataModel", "()V");
        if (method == NULL) {
            LOGD("ERROR - cant access method");
        }
        // this works, but its a new java object
        //jobject myobj2 = myEnv->NewObject(javaClass, method);

        //this is where the crash occurs
        myEnv->CallVoidMethod(myobj, method, NULL);
    }

    If instead I create a new jobject using env->NewObject, I can successfully call back into Java, but it is a new object and I don't want that. I need to get back to my original Java object. Is it a matter of switching threads before I call back into Java? If so, how do I do so?

    Read the article

  • Android - Where to store generated bitmaps?

    - by Josh
    I've got an app which dynamically generates anywhere from 6 to 100 small bitmaps for the user to move around the screen in a given session. I currently generate them in onCreate and store them to the SD card, so that after an orientation change I can grab them out of external storage and display them again. However, this takes time (the loading) and I'd like to keep the bitmap references around between lifecycle changes for quicker access. My question is, is there a better place to store my generated bitmaps? I was thinking about creating a static storage library in my base activity, something that would only need to be reloaded when the app is completely removed from memory (shutdown, other apps need resources, 30 minute restart, etc). Ideally, I'd like the user to be able to back out to the title screen, click a "Resume" button, and in onCreate I just have access to those resident bitmap references instead of having to load them from storage again. For this reason I don't think Activity.onRetainNonConfigurationInstance is what I need. Alternatively, is there a better way to handle multiple generated bitmaps than what I'm doing or the plan I described?

    Read the article

  • Writing a synchronized thread-safety wrapper for NavigableMap

    - by polygenelubricants
    java.util.Collections currently provides the following utility methods for creating synchronized wrappers for various collection interfaces:

    synchronizedCollection(Collection<T> c)
    synchronizedList(List<T> list)
    synchronizedMap(Map<K,V> m)
    synchronizedSet(Set<T> s)
    synchronizedSortedMap(SortedMap<K,V> m)
    synchronizedSortedSet(SortedSet<T> s)

    Analogously, it also has 6 unmodifiableXXX overloads. The glaring omission here is the set of utility methods for NavigableMap<K,V>. It's true that it extends SortedMap, but so does SortedSet extend Set, and Set extends Collection, and Collections has dedicated utility methods for SortedSet and Set. Presumably NavigableMap is a useful abstraction, or else it wouldn't have been there in the first place, and yet there are no utility methods for it. So the questions are: Is there a specific reason why Collections doesn't provide utility methods for NavigableMap? How would you write your own synchronized wrapper for NavigableMap? Glancing at the source code of the OpenJDK version of Collections.java seems to suggest that this is just a "mechanical" process. Is it true that in general you can add synchronized thread-safety like this? If it's such a mechanical process, can it be automated? (Eclipse plug-in, etc) Is this code repetition necessary, or could it have been avoided by a different OOP design pattern?

    Read the article

  • EventAggregator, is it thread-safe?

    - by pfaz
    Is this thread-safe? The EventAggregator in Prism is a very simple class with only one method. I was surprised when I noticed that there was no lock around the null check and creation of a new type to add to the private _events collection. If two threads called GetEvent simultaneously for the same type (before it exists in _events) it looks like this would result in two entries in the collection.

    /// <summary>
    /// Gets the single instance of the event managed by this EventAggregator. Multiple calls to this method with the same <typeparamref name="TEventType"/> returns the same event instance.
    /// </summary>
    /// <typeparam name="TEventType">The type of event to get. This must inherit from <see cref="EventBase"/>.</typeparam>
    /// <returns>A singleton instance of an event object of type <typeparamref name="TEventType"/>.</returns>
    public TEventType GetEvent<TEventType>() where TEventType : EventBase
    {
        TEventType eventInstance = _events.FirstOrDefault(evt => evt.GetType() == typeof(TEventType)) as TEventType;
        if (eventInstance == null)
        {
            eventInstance = Activator.CreateInstance<TEventType>();
            _events.Add(eventInstance);
        }
        return eventInstance;
    }
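    A minimal sketch (not Prism's actual code) of how the lookup could be made safe: take a lock around the check-and-add, and key the backing collection by event type, which also removes the linear FirstOrDefault scan. A ConcurrentDictionary with GetOrAdd would work equally well on .NET 4.

    private readonly object _lock = new object();
    private readonly Dictionary<Type, EventBase> _events = new Dictionary<Type, EventBase>();

    public TEventType GetEvent<TEventType>() where TEventType : EventBase
    {
        lock (_lock)
        {
            EventBase existing;
            if (!_events.TryGetValue(typeof(TEventType), out existing))
            {
                existing = Activator.CreateInstance<TEventType>();
                _events[typeof(TEventType)] = existing;
            }
            return (TEventType)existing;
        }
    }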

    Read the article

  • "Thread was being aborted" 0n large dataset

    - by Donaldinio
    I am trying to process 114,000 rows in a dataset (populated from an oracle database). I am hitting an error at around the 600 mark - "Thread was being aborted". All I am doing is reading the dataset, and I still hit the issue. Is this too much data for a dataset? It seems to load into the dataset ok though. I welcome any better ways to process this amount of data. rootTermsTable = entKw.GetRootKeywordsByCategory(catID); for (int k = 0; k < rootTermsTable.Rows.Count; k++) { string keywordID = rootTermsTable.Rows[k]["IK_DBKEY"].ToString(); ... } public DataTable GetKeywordsByCategory(string categoryID) { DbProviderFactory provider = DbProviderFactories.GetFactory(connectionProvider); DbConnection con = provider.CreateConnection(); con.ConnectionString = connectionString; DbCommand com = provider.CreateCommand(); com.Connection = con; com.CommandText = string.Format("Select * From icm_keyword WHERE (IK_IC_DBKEY = {0})",categoryID); com.CommandType = CommandType.Text; DataSet ds = new DataSet(); DbDataAdapter ad = provider.CreateDataAdapter(); ad.SelectCommand = com; con.Open(); ad.Fill(ds); con.Close(); DataTable dt = new DataTable(); dt = ds.Tables[0]; return dt; //return ds.Tables[0].DefaultView; }
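    If this is running inside an ASP.NET page, a ThreadAbortException after a few hundred rows is more often the request timing out (or a Response.Redirect/Response.End) than the DataSet itself; either way, streaming the rows with a DbDataReader instead of materialising all 114,000 of them keeps both memory and time down. A rough sketch reusing the poster's provider setup and column names (the string.Format is kept from the original for brevity; a bind parameter would be better in practice):

    DbProviderFactory provider = DbProviderFactories.GetFactory(connectionProvider);
    using (DbConnection con = provider.CreateConnection())
    {
        con.ConnectionString = connectionString;
        using (DbCommand com = con.CreateCommand())
        {
            com.CommandText = string.Format(
                "Select IK_DBKEY From icm_keyword WHERE (IK_IC_DBKEY = {0})", categoryID);
            con.Open();
            using (DbDataReader reader = com.ExecuteReader())
            {
                while (reader.Read())
                {
                    string keywordID = reader["IK_DBKEY"].ToString();
                    // process one row at a time instead of holding the whole table
                }
            }
        }
    }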

    Read the article

  • StructureMap Configuration Per Thread/Request for the Full Dependency Chain

    - by Phil Sandler
    I've been using StructureMap for a few months now, and it has worked great on some smaller (green-field) projects. Most of the configurations I have had to set up involve a single implementation per interface. In the cases where I needed to choose a specific implementation at runtime, I created a factory class that used ObjectFactory.GetNamedInstance<T>(). In the smaller projects there were few enough of these cases that I was comfortable with the references to ObjectFactory. My understanding is that you want to limit these references as much as possible, and ideally only reference the ObjectFactory once. I am working to refactor a larger codebase to use IOC/StructureMap, and am finding that I may need many of these factory classes with ObjectFactory references to get what I need. Essentially, I am creating a "root service" with the ObjectFactory, so that everything in the dependency chain is managed by the container. The root service is created by name (i.e. "BuildCar", "BuildTruck"), and the services needed deeper in the dependency chain could also be constructed using the same name--so the "IAttachWheels" service could vary based on whether a car or truck is being built. Since the class that depends on IAttachWheels is the same in both configurations, I don't think I can use ConstructedBy in the registry to choose the implementation. Also, to be clear, the IAttachWheels implementations need to be managed by the container as well, because the dependency chain runs fairly deep. I looked briefly at Profiles as an option, but read (here on StackOverflow) that changing profiles essentially changes implementations for all threads. Is there a feature similar to profiles that is thread/request specific? Is the factory class that references ObjectFactory the right way to go? Any thoughts would be appreciated.
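    One way to keep the container reference in a single place is to hide the named lookup behind an application-owned factory interface and let the variant name ("BuildCar", "BuildTruck") flow through the chain as plain data; only the factory implementation below touches the container. This is a sketch built solely on the GetNamedInstance call the question already uses; the interface, class and method names (including AttachWheels) are illustrative.

    // Application code depends on this, never on ObjectFactory directly.
    public interface IServiceFactory
    {
        T GetNamed<T>(string variant);
    }

    // The single place in the codebase that talks to the container.
    public class StructureMapServiceFactory : IServiceFactory
    {
        public T GetNamed<T>(string variant)
        {
            return ObjectFactory.GetNamedInstance<T>(variant);
        }
    }

    // A builder picks its collaborators by the same variant name it was created with,
    // so the whole chain ("BuildCar" vs "BuildTruck") stays consistent for one request.
    public class VehicleBuilder
    {
        private readonly IServiceFactory _factory;
        private readonly string _variant;

        public VehicleBuilder(IServiceFactory factory, string variant)
        {
            _factory = factory;
            _variant = variant;
        }

        public void Build()
        {
            var wheels = _factory.GetNamed<IAttachWheels>(_variant);   // hypothetical usage
            wheels.AttachWheels();
        }
    }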

    Read the article

  • Thread-Safe lazy instantiating using MEF

    - by Xaqron
    // Member Variable
    private static readonly object _syncLock = new object();

    // Now inside a static method
    foreach (var lazyObject in plugins)
    {
        if ((string)lazyObject.Metadata["key"] == "something")
        {
            lock (_syncLock)
            {
                // It seems the `IsValueCreated` is not up-to-date
                if (!lazyObject.IsValueCreated)
                    lazyObject.Value.DoSomething();
            }
            return lazyObject.Value;
        }
    }

    Here I need synchronized access per loop iteration. There are many threads iterating this loop and, based on the key they are looking for, a lazy instance is created and returned. lazyObject should not be created more than once. Although the Lazy class is meant to do exactly that, and despite the lock, under heavy threading I get more than one instance created (I track this with an Interlocked.Increment on a volatile static int and log it somewhere). The problem is I don't have access to the definition of Lazy, and MEF defines how the Lazy class creates objects. I should note that the CompositionContainer has a thread-safe option in its constructor, which is already used. My questions: 1) Why doesn't the lock work? 2) Should I use an array of locks instead of one lock to improve performance?
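    For a plain Lazy<T> (outside MEF), the usual idiom is to push the thread-safety into the Lazy itself rather than guard it with an external lock. A minimal sketch; ExpensivePlugin is an illustrative stand-in for whatever the Lazy wraps.

    class ExpensivePlugin
    {
        public void DoSomething() { }
    }

    static class PluginHolder
    {
        // ExecutionAndPublication: many threads may race to .Value, but the factory
        // runs at most once and every caller observes the same instance.
        private static readonly Lazy<ExpensivePlugin> _plugin =
            new Lazy<ExpensivePlugin>(() => new ExpensivePlugin(),
                                      LazyThreadSafetyMode.ExecutionAndPublication);

        public static void Use()
        {
            _plugin.Value.DoSomething();   // safe from any thread, no external lock needed
        }
    }

    With MEF's Lazy exports, whether two wrappers hand back the same underlying part also depends on the part's creation policy (Shared vs. NonShared), which is worth ruling out before blaming the lock.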

    Read the article

  • How to multi-thread this?

    - by WilliamKF
    I wish to have two threads. The first, thread1, occasionally calls the following pseudo function:

    void waitForThread2()
    {
        if (thread2 is not idle) {
            return;
        }
        notifyThread2IamReady();
        while (thread2IsExclusive) {
        }
    }

    The second, thread2, runs forever in the following pseudo loop:

    for (;;) {
        Notify thread1 I am idle.
        while (!thread1IsReady()) {
        }
        Notify thread1 I am exclusive.
        Do some work while thread1 is blocked.
        Notify thread1 I am busy.
        Do some work in parallel with thread1.
    }

    What is the best way to write this so that both thread1 and thread2 are kept as busy as possible on a machine with multiple cores? I would like to avoid long delays between notification in one thread and detection by the other. I tried using pthread condition variables, but found the delay between thread2 doing "Notify thread1 I am busy" and the loop in waitForThread2() on thread2IsExclusive can be almost one second. I then tried using a volatile sig_atomic_t shared variable to control the same thing, but something is going wrong, so I must not be doing it correctly.
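    A sketch of the same rendezvous using event objects instead of spinning; it is written in C# to match the other entries on this page, but the shape carries over directly to pthreads (one mutex plus one condition variable per flag). The early return when thread2 is still busy is omitted to keep the sketch short, and DoExclusiveWork/DoParallelWork are placeholders.

    // requires System.Threading
    class Rendezvous
    {
        private readonly AutoResetEvent _thread1Ready  = new AutoResetEvent(false);
        private readonly AutoResetEvent _exclusiveDone = new AutoResetEvent(false);

        // Called occasionally by thread1.
        public void WaitForThread2()
        {
            _thread1Ready.Set();        // notify thread2 I am ready
            _exclusiveDone.WaitOne();   // block only while thread2 is exclusive
        }

        // Body of thread2's forever loop.
        public void Thread2Loop()
        {
            while (true)
            {
                _thread1Ready.WaitOne();   // idle until thread1 says it is ready
                DoExclusiveWork();         // thread1 is blocked during this
                _exclusiveDone.Set();      // release thread1
                DoParallelWork();          // runs alongside thread1
            }
        }

        private void DoExclusiveWork() { }
        private void DoParallelWork() { }
    }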

    Read the article

  • How to show and update popup in 1 thread

    - by user3713986
    I have one app with two forms, MainFrm and PopupFrm, and one thread that pushes information to PopupFrm. To update PopupFrm I currently use the following. In MainFrm.cs:

    private PopupFrm mypop;

    MainFrm()
    {
        ....
        PopupFrm mypop = new PopupFrm();
        mypop.Show();
    }

    MyThread()
    {
        Process GetData(); ...
        mypop.Update();
        ...
    }

    In PopupFrm.cs:

    public void Update()
    {
        this.Invoke((MethodInvoker)delegate
        {
            ....
        });
    }

    The problem is that the popup always displays as soon as MainFrm is displayed (at application start), not when there is data to update. So I changed MainFrm.cs to:

    private PopupFrm mypop;
    private bool firstdisplay = false;

    MainFrm()
    {
        ....
        PopupFrm mypop = new PopupFrm();
        //mypop.Show();
    }

    MyThread()
    {
        Process GetData(); ...
        if (!firstdisplay)
        {
            mypop.Show();
            firstdisplay = true;
        }
        mypop.Update();
        ...
    }

    But now it cannot update the popup GUI. How can I fix this issue? Thanks.
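    A sketch of one way to arrange this, reusing the poster's names (note that, as transcribed, `PopupFrm mypop = new PopupFrm();` inside MainFrm() declares a local that shadows the `mypop` field, so the field the thread later uses is never assigned; this assumes that is just a transcription slip): create the form on the UI thread, show it only when the first data arrives, and marshal both the Show and the update with Invoke. GetData, SomeData and label1 are placeholders, and Update is renamed UpdateDisplay so it does not hide Control.Update.

    // MainFrm.cs
    private PopupFrm mypop;

    public MainFrm()
    {
        InitializeComponent();
        mypop = new PopupFrm();          // created on the UI thread, not shown yet
    }

    private void MyThread()
    {
        while (true)
        {
            var data = GetData();        // background work stays off the UI thread

            this.Invoke((MethodInvoker)delegate
            {
                if (!mypop.Visible)
                    mypop.Show(this);            // first data: show the popup now
                mypop.UpdateDisplay(data);       // plain UI code, already on the UI thread
            });
        }
    }

    // PopupFrm.cs
    public void UpdateDisplay(SomeData data)
    {
        label1.Text = data.ToString();   // no Invoke needed: callers are on the UI thread
    }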

    Read the article

  • Bluetooth BluetoothSocket.connect() thread: how to close this thread

    - by Hia
    I am trying to make an Android app that connects to a Bluetooth device. It works fine, but when I call BluetoothSocket.connect() and it is not able to connect to the device, the call blocks the thread and does not throw any exception. So when I try to close the application while connect() is running, it does not respond. How can I cancel it? I used BluetoothSocket.close() in ... but it is still not working for me. protected void simpleComm(Integer port) { // The documents tell us to cancel the discovery process. try { Method m = mmDevice.getClass().getMethod("createRfcommSocket", new Class[] { int.class }); mmSocket = (BluetoothSocket) m.invoke(mmDevice, port); mmSocket.connect(); // <== blocks when it cannot connect Log.d(TAG, " connection success==="); }catch(Exception e){ if (!abort) { connectionFailed(); // Close the socket try { mmSocket.close(); // Start the service over to restart listening mode BluetoothService.this.start(); } catch (IOException e2) { Log.e(TAG,"unable to close() socket during connection failure", e2); } } return; } } public void cancel() { try { abort = true; mmSocket.close(); } catch (IOException e) { Log.e(TAG, "close() of connect socket failed", e); } }

    Read the article

  • Best choice for a personal "online backup" in Europe

    - by marc_s
    I'm looking for an online backup solution for personal use - besides all the usual requirements (like not too expensive, since it's for personal use), I'd like to add two requirements to it: data center should be in Europe (I don't want my personal data stored in the US, when the next crazed president comes along and wants to confiscate and rifle through everybody's files.....) the online backup store should be accessible through a drive letter in cmd.exe So far, I've looked at a few services, but none have totally convinced me: Dropbox is looking ok, but they insist on creating a silly "My Dropbox" directory in my data path - and there's no way I can choose that name. Sorry - "My everything" is for dummies - I don't like that, I like to name my files and folders according to my liking LiveDrive is OK, too - they offer European storage, drive letter and all - but those drive letters are only available in the Windows Explorer - and not on the cmd.exe command line :-( and since I do 99% of my work on the command line, this is a major drawback..... Any other services I haven't looked at worth checking out? Marc

    Read the article

  • How to check CPU temperature on a HP P2000?

    - by Pavel
    I have a HP StorageWorks MSA Storage P2000 G3 SAS. show sensor-status gives something like # show sensor-status Sensor Name Value Status ---------------------------------------------------- On-Board Temperature 1-Ctlr A 53 C OK On-Board Temperature 1-Ctlr B 52 C OK On-Board Temperature 2-Ctlr A 61 C OK On-Board Temperature 2-Ctlr B 63 C OK On-Board Temperature 3-Ctlr A 53 C OK On-Board Temperature 3-Ctlr B 53 C OK Disk Controller Temp-Ctlr A 34 C OK Disk Controller Temp-Ctlr B 32 C OK Memory Controller Temp-Ctlr A 66 C OK Memory Controller Temp-Ctlr B 67 C OK [...] Overall Unit Status OK OK Temperature Loc: upper-IOM A 40 C OK Temperature Loc: lower-IOM B 38 C OK Temperature Loc: left-PSU 36 C OK Temperature Loc: right-PSU 40 C OK [...] is one of the values the CPU/FPGA temperature? Or, if not, how do I get it? Thanks!

    Read the article

  • "Unable to initialize module" warning after update PHP on CentOS 5.4

    - by ohho
    After upgrading PHP from 5.1x to 5.2.10, there are a lot of warning when php -v: [root@localhost ~]# php -v PHP Warning: PHP Startup: fileinfo: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 PHP Warning: PHP Startup: mcrypt: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 PHP Warning: PHP Startup: memcache: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 PHP Warning: PHP Startup: mhash: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 PHP Warning: PHP Startup: mssql: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 PHP Warning: PHP Startup: readline: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 PHP Warning: PHP Startup: tidy: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 PHP 5.2.10 (cli) (built: Nov 13 2009 11:24:03) Copyright (c) 1997-2009 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies How can I fix that? Thanks!

    Read the article

  • Single/Mulitple LUN for vmware vm hosting

    - by Yucong Sun
    I'm building an iSCSI storage system for hosting about ~500 VMware VMs running concurrently, and I have a disk array with 15 disks. I only need moderate write performance, but preferably not SPOFed, so that leaves me with RAID1 / RAID10. I have a couple of choices:

    1) 3x LUN, 4-disk RAID10 + 3 hot spares
    2) 1x LUN, 14-disk RAID10 + 1 hot spare
    3) 7x LUN, 2-disk RAID1 + 1 hot spare

    Which way is better? Is there a real problem with running 500 VMs on a single LUN? And would it be better to resort to 7 LUNs so each VM is better isolated from the others?

    Read the article

  • Improving SAS multipath to JBOD performance on Linux

    - by user36825
    Hello all, I'm trying to optimize a storage setup on some Sun hardware with Linux. Any thoughts would be greatly appreciated. We have the following hardware:

    Sun Blade X6270
    2* LSISAS1068E SAS controllers
    2* Sun J4400 JBODs with 1 TB disks (24 disks per JBOD)
    Fedora Core 12
    2.6.33 release kernel from FC13 (also tried with the latest 2.6.31 kernel from FC12, same results)

    Here's the datasheet for the SAS hardware: http://www.sun.com/storage/storage_networking/hba/sas/PCIe.pdf

    It's using PCI Express 1.0a, 8x lanes. With a bandwidth of 250 MB/sec per lane, we should be able to do 2000 MB/sec per SAS controller. Each controller can do 3 Gb/sec per port and has two 4-port PHYs. We connect both PHYs from a controller to a JBOD. So between the JBOD and the controller we have 2 PHYs * 4 SAS ports * 3 Gb/sec = 24 Gb/sec of bandwidth, which is more than the PCI Express bandwidth. With write caching enabled and when doing big writes, each disk can sustain about 80 MB/sec (near the start of the disk). With 24 disks, that means we should be able to do 1920 MB/sec per JBOD.

    multipath {
        rr_min_io 100
        uid 0
        path_grouping_policy multibus
        failback manual
        path_selector "round-robin 0"
        rr_weight priorities
        alias somealias
        no_path_retry queue
        mode 0644
        gid 0
        wwid somewwid
    }

    I tried values of 50, 100, 1000 for rr_min_io, but it doesn't seem to make much difference. Along with varying rr_min_io I tried adding some delay between starting the dd's to prevent all of them writing over the same PHY at the same time, but this didn't make any difference, so I think the I/Os are getting properly spread out. According to /proc/interrupts, the SAS controllers are using an "IR-IO-APIC-fasteoi" interrupt scheme. For some reason only core #0 in the machine is handling these interrupts. I can improve performance slightly by assigning a separate core to handle the interrupts for each SAS controller:

    echo 2 > /proc/irq/24/smp_affinity
    echo 4 > /proc/irq/26/smp_affinity

    Using dd to write to the disk generates "Function call interrupts" (no idea what these are), which are handled by core #4, so I keep other processes off this core too. I run 48 dd's (one for each disk), assigning them to cores not dealing with interrupts like so:

    taskset -c somecore dd if=/dev/zero of=/dev/mapper/mpathx oflag=direct bs=128M

    oflag=direct prevents any kind of buffer cache from getting involved. None of my cores seem maxed out. The cores dealing with interrupts are mostly idle and all the other cores are waiting on I/O, as one would expect.

    Cpu0  :  0.0%us,  1.0%sy,  0.0%ni, 91.2%id,  7.5%wa,  0.0%hi,  0.2%si,  0.0%st
    Cpu1  :  0.0%us,  0.8%sy,  0.0%ni, 93.0%id,  0.2%wa,  0.0%hi,  6.0%si,  0.0%st
    Cpu2  :  0.0%us,  0.6%sy,  0.0%ni, 94.4%id,  0.1%wa,  0.0%hi,  4.8%si,  0.0%st
    Cpu3  :  0.0%us,  7.5%sy,  0.0%ni, 36.3%id, 56.1%wa,  0.0%hi,  0.0%si,  0.0%st
    Cpu4  :  0.0%us,  1.3%sy,  0.0%ni, 85.7%id,  4.9%wa,  0.0%hi,  8.1%si,  0.0%st
    Cpu5  :  0.1%us,  5.5%sy,  0.0%ni, 36.2%id, 58.3%wa,  0.0%hi,  0.0%si,  0.0%st
    Cpu6  :  0.0%us,  5.0%sy,  0.0%ni, 36.3%id, 58.7%wa,  0.0%hi,  0.0%si,  0.0%st
    Cpu7  :  0.0%us,  5.1%sy,  0.0%ni, 36.3%id, 58.5%wa,  0.0%hi,  0.0%si,  0.0%st
    Cpu8  :  0.1%us,  8.3%sy,  0.0%ni, 27.2%id, 64.4%wa,  0.0%hi,  0.0%si,  0.0%st
    Cpu9  :  0.1%us,  7.9%sy,  0.0%ni, 36.2%id, 55.8%wa,  0.0%hi,  0.0%si,  0.0%st
    Cpu10 :  0.0%us,  7.8%sy,  0.0%ni, 36.2%id, 56.0%wa,  0.0%hi,  0.0%si,  0.0%st
    Cpu11 :  0.0%us,  7.3%sy,  0.0%ni, 36.3%id, 56.4%wa,  0.0%hi,  0.0%si,  0.0%st
    Cpu12 :  0.0%us,  5.6%sy,  0.0%ni, 33.1%id, 61.2%wa,  0.0%hi,  0.0%si,  0.0%st
    Cpu13 :  0.1%us,  5.3%sy,  0.0%ni, 36.1%id, 58.5%wa,  0.0%hi,  0.0%si,  0.0%st
    Cpu14 :  0.0%us,  4.9%sy,  0.0%ni, 36.4%id, 58.7%wa,  0.0%hi,  0.0%si,  0.0%st
    Cpu15 :  0.1%us,  5.4%sy,  0.0%ni, 36.5%id, 58.1%wa,  0.0%hi,  0.0%si,  0.0%st

    Given all this, the throughput reported by running "dstat 10" is in the range of 2200-2300 MB/sec. Given the math above I would expect something in the range of 2*1920 ~= 3600+ MB/sec. Does anybody have any idea where my missing bandwidth went? Thanks!

    Read the article

  • Gluster bricks are offline and errors in logs

    - by Roman Newaza
    I have substituted all the IP addresses with hostnames and renamed configs (IP to hostname) in /var/lib/glusterd by my shell script. After that I restarted Gluster Daemon and the volume. Then I checked if all the peers are connected: root@GlusterNode1a:~# gluster peer status Number of Peers: 3 Hostname: gluster-1b Uuid: 47f469e2-907a-4518-b6a4-f44878761fd2 State: Peer in Cluster (Connected) Hostname: gluster-2b Uuid: dc3a3ff7-9e30-44ac-9d15-00f9dab4d8b9 State: Peer in Cluster (Connected) Hostname: gluster-2a Uuid: 72405811-15a0-456b-86bb-1589058ff89b State: Peer in Cluster (Connected) I could see mounted volumes size change on all the nodes when I execute df command, so new data is coming. But recently I noticed error messages in app log: copy(/storage/152627/dat): failed to open stream: Structure needs cleaning readfile(/storage/1438227/dat): failed to open stream: Input/output error unlink(/storage/189457/23/dat): No such file or directory Finally, I have found out some bricks are offline: root@GlusterNode1a:~# gluster volume status Status of volume: storage Gluster process Port Online Pid ------------------------------------------------------------------------------ Brick gluster-1a:/storage/1a 24009 Y 1326 Brick gluster-1b:/storage/1b 24009 N N/A Brick gluster-2a:/storage/2a 24009 N N/A Brick gluster-2b:/storage/2b 24009 N N/A Brick gluster-1a:/storage/3a 24011 Y 1332 Brick gluster-1b:/storage/3b 24011 N N/A Brick gluster-2a:/storage/4a 24011 N N/A Brick gluster-2b:/storage/4b 24011 N N/A NFS Server on localhost 38467 Y 24670 Self-heal Daemon on localhost N/A Y 24676 NFS Server on gluster-2b 38467 Y 4339 Self-heal Daemon on gluster-2b N/A Y 4345 NFS Server on gluster-2a 38467 Y 1392 Self-heal Daemon on gluster-2a N/A Y 1402 NFS Server on gluster-1b 38467 Y 2435 Self-heal Daemon on gluster-1b N/A Y 2441 What can I do about that? I need to fix it. Note: CPU and Network usage of all the four nodes are about the same.

    Read the article

  • How do I use an internal SSD as a scratch disk for FCP X?

    - by andrewb
    I'm contemplating setting up my MacBook Air as a video editing machine. If I do this, I'll upgrade to a 256 GB SSD, and I should be able to keep around 100 GB or more free for video editing. The video files would of course be stored externally, but save purchasing some expensive Thunderbolt RAID device (which I suppose is gradually becoming more of an option), it will be slow for read/writes. How can I have a set up where I take advantage of my SSD's speed for a scratch disk/cache for FCP X, but still have the TB(s) of storage of externals? I don't want to have to be moving files constantly back and forth, this is about saving time not wasting it.

    Read the article

  • Do Seagate Momentus XT SSD Hybrid drives perform better than a good hard drive + flash on ReadyBoost

    - by Chris W. Rea
    Seagate has released a product called the Momentus XT Solid State Hybrid Drive. At a glance, this looks exactly like what Windows ReadyBoost attempts to do with software at the OS level: Pairing the benefits of a large hard drive together with the performance of solid-state flash memory. Does the Momentus XT out-perform a similar ad-hoc pairing of a decent hard drive with similar flash memory storage under Windows ReadyBoost? Other than the obvious "a hardware implementation ought to be faster than a software implementation", why would ReadyBoost not be able to perform as well as such a hybrid device?

    Read the article

  • Is a "failed" RAID 5 disk really no good?

    - by GregH
    This is my first venture into setting up RAID on my home system. After installing 3 x 1TB drives in RAID 5, everything was running well for about 10 days. Then the Intel Rapid Storage Technology software that monitors the disks and RAID on my system told me that I had a failed drive. I marked the drive as good, and the array rebuilt. Then a day or so later I got a notification again that the drive had failed. I'm just wondering if this drive really is no good, or if there is something I can do to get it working again? Or do I just need to return it to the store where I bought it and get a replacement?

    Read the article

  • How to use new disk space after extending an attached SAN disk

    - by Edu Lomeli
    I have extended the space of my SAN vDisk from 1TB to 1.2TB, but Windows Explorer doesn't show the new size. After resizing the vDisk in the SAN Manager, the Disk Management utility shows the 200GB of unallocated space, so I resized the partition to use the unallocated space and ended up with a 1.2TB partition. The process completed successfully, but in Windows File Explorer the disk still shows 1TB of total space. Win version: Windows Storage Server Enterprise 2007. Do I need to restart the server? How can I use the new extra space without rebooting?

    Read the article
