Search Results

Search found 20 results on 1 page for 'm ric'.

Page 1/1

  • Chrome 6 already available for developers, barely two weeks after a Chrome 5 beta

    Update of 18/05/10: Chrome 6 already available for developers, barely two weeks after the Chrome 5 beta. New versions of Chrome, Google's browser, keep arriving one after another. Barely two weeks after the arrival of a Chrome 5 beta rich in new features (read above), the development team has just announced Chrome 6 on the developer channel. For now, the changes relative to Chrome 5 are minimal (for example, improved copying of Web addresses and handling of the address bar). But above all, this version 6 indicates that the work on version 5...

    Read the article

  • SharpSSH gets stuck in an infinite stream read in C# SSH app

    - by Ric Coles
    Afternoon all, I'm having a small problem with the SharpSSH library for .NET (see http://www.tamirgal.com/blog/page/SharpSSH.aspx) SshStream ssh = new SshStream("some ip address", "some username", "some password"); ssh.Prompt = "\n"; ssh.RemoveTerminalEmulationCharacters = true; ssh.Write("ssh some ip address"); // Don't care about this response ssh.ReadResponse(); ssh.Write("lss /mnt/sata[1-4]"); // Don't care about this response (for now) ssh.ReadResponse(); // while the stream can be read while (ssh.CanRead) { Console.WriteLine(ssh.ReadResponse()); } ssh.Close(); As you can see, it's fairly straightforward. However, when the while loop gets stepped into, it won't break out of the loop when everything has been printed to the console and there is nothing else to read. Is there any way I can manually force it to break when there is nothing else to be read? Cheers, Ric
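
    A hedged sketch of one way out, grounded in the calls the question already uses: CanRead (standard Stream semantics) reports whether the stream supports reading at all, not whether data is pending, so the loop never exits. Since each Write of a command yields one response terminated by the prompt, setting Prompt to the remote shell's real prompt and reading exactly one response per command avoids the blocking read entirely. The prompt string below is an assumption about the remote host.

      // One ReadResponse() per Write(): ReadResponse returns once the
      // prompt pattern is seen, so nothing is left to block on afterwards.
      SshStream ssh = new SshStream("some ip address", "some username", "some password");
      ssh.Prompt = "$ ";  // assumption: a Bourne-style prompt on the remote host
      ssh.RemoveTerminalEmulationCharacters = true;

      ssh.Write("ls /mnt/sata1");
      Console.WriteLine(ssh.ReadResponse());  // the single prompt-terminated reply

      ssh.Close();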

    Read the article

  • How to do a 3-tier architecture using PHP [closed]

    - by Ric
    I have a requirement from a client for my PHP Web application to be 3-tier. For example, I would have a web server running Apache in the DMZ, but it should NOT contain any DB connections. It should connect to a middle server that would host the business objects but sit behind the firewall. Those objects would then connect to my SQL cluster on another server. I have actually done this using .NET, but I am not sure how to set up my stack using PHP. I suppose I could have my UI front tier call the middle tier using REST-based web services if I create my middle tier as a second web server, but this seems overly complex. The main reason for this is advanced security: we cannot have any passwords on the DMZ first-tier web server. The second reason is scalability - to have multiple servers on different tiers that can handle the requests. The last reason is deployment - it is easier if I can take one set of servers offline for testing before putting them back in production. Is there an open source project that shows how to do this? The only example I can find is the web server hosting files from a shared drive on another machine (kind of how DotNetNuke pretends to be 3-tier), but that is NOT secure.
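
    For what it's worth, the REST approach the question sketches needs nothing exotic on the DMZ box. A minimal sketch; the hostname and route are invented for illustration, and only the middle tier holds the SQL credentials:

      <?php
      // DMZ front tier: fetch business objects from the middle tier over HTTPS.
      // No database credentials exist anywhere on this server.
      $ch = curl_init('https://middle.internal.example/api/orders?customer=42');
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept: application/json'));
      $body = curl_exec($ch);
      curl_close($ch);

      // The middle tier returns serialized business objects; decode and render.
      $orders = json_decode($body, true);
      ?>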

    Read the article

  • Ubuntu 12.10 boots to purple screen

    - by Ric
    I know this question has been tackled in a couple of different threads, but I've tried what I could from those and have not resolved the issue. I have just a basic understanding of this system, so feel free to talk down to me or explain this like you would to a 5 year old. Let's start from the beginning. My son has a computer built by an IT friend of mine (we moved, so he can't help any more). It had Windows XP running on it, and it just stopped working correctly. This same friend had built a laptop for me with Ubuntu, which I liked, so I thought I'd put a new OS on my son's computer and it might work better. I downloaded Ubuntu 12.10 onto a USB drive and loaded it onto his computer. I followed all the prompts, it installed, I restarted the computer, and it gives me the option of which OS to pick. I pick Ubuntu and it seemingly loads. The desktop comes up with just the basic pinkish Ubuntu background, but that is it. There are no icons. I can't right-click anywhere to create a file. Left-clicking the mouse does not create a selection box when moved. Alt + F2 doesn't do anything. I can open a terminal, but none of the commands I have seen in previous threads correct any issues. What else can I do, or what resources are available to fix this problem? I don't know if there are additional files on the USB drive that I need to access or what. Also, one of the problems we were having with my son's computer is that Windows would only load to a blank screen. It runs fine in safe mode, and my install of Ubuntu was done through safe mode of Windows XP.
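
    Not a definitive fix, but these are the recovery commands most often suggested for a Unity desktop that comes up with only the wallpaper, run from the terminal that still opens (package names assume stock Ubuntu 12.10):

      sudo apt-get update
      sudo apt-get install --reinstall ubuntu-desktop unity
      dconf reset -f /org/compiz/   # resets Compiz/Unity settings to defaults
      setsid unity                  # relaunches the Unity shell in this session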

    Read the article

  • VPN providers and connection from a known location

    - by Ric
    I am interested in using a VPN service. I want to visually monitor online advertisements in different locations: Germany, France, the Netherlands, and the UK. I would like a VPN provider that connects to the website of interest from these locations and also lets me choose the location of the server I connect from. A big plus would be the ability to compare the website over different connections side by side. Do any providers allow this?

    Read the article

  • How to reverse-i-search back and forth?

    - by m-ric
    I use reverse-i-search often, and that's cool. Sometimes, though, when pressing Ctrl+r multiple times, I skip past the command I am actually looking for. Because Ctrl+r searches backward in history, from newest to oldest, I have to cancel, search again, and stop exactly at the command without passing it. While in the reverse-i-search prompt, is it possible to search forward, i.e. from where I stand toward the newest? I naively tried Ctrl+shift+r, no luck. I heard about Ctrl+g, but that is not what I am after here. Does anyone have an idea?
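
    The missing piece is that readline already binds forward-i-search to Ctrl+S; most terminals just swallow that key for XON/XOFF flow control. Freeing it:

      stty -ixon   # add to ~/.bashrc to make it permanent
      # Now Ctrl+R steps backward through matches and Ctrl+S steps forward,
      # so overshooting during a reverse-i-search is recoverable in place.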

    Read the article

  • OS X superuser folders automatically created; per-user launchd process appears to kill 501

    - by Ric Pen
    New Apple laptop running OS X 10.8.2. I have used OS X before, but many years previously, and am not familiar with subtleties or changes in com.apple.launchd.peruser.x... I have previously (and in retrospect, foolishly) made changes to these rapidly spawned new peruser accounts (my initial reaction was that if ipfw was disabled, then I might well be under hacker attack, which I have dealt with years ago), but I believe I was wrong, and the results of my efforts at preserving the system's integrity have in fact been destructive, an overreaction, and have resulted in much work to restore. My understanding from other posts is that superuser protocols have changed quite dramatically since I bought the first developer version of OS X many years ago. I haven't developed on Apple much since then, with the exception of WebObjects (IMO much underrated at that time, and more user friendly than ASP prior to .NET, I vaguely recall). Creation of these apparently nasty peruser folders appears to confound the 501 process, which logs an inability to find the firewall (ipfw). Can someone help me with this? I am concerned that either the system is improperly configured or an application was improperly installed (although there is little here beyond Apple's SDK, which I find quite accommodating and intuitive). Still, I am a novice, I only sporadically develop at this time, and I would really just like to see this system running happily. Please offer assistance, in the form of potential info sources, or if you have had a similar experience, then perhaps scripts to suss out this issue. I do not wish to damage the system, but Apple's Developer Connection and discussion threads do not appear to have dealt with this particular issue recently... although I may well have missed something you have not - please apprise. Any assistance on this issue is very much appreciated - by an old guy who wants to do some things which were fun about 20 years ago.

    Read the article

  • How to use C++0x threads in the Android NDK?

    - by m-ric
    I am trying to compile this simple program with android-ndk-r8b: jni/hello_jni.cpp #include <iostream> #include <thread> void hello() { std::cout << "Hi i'm a thread!!!" << std::endl; } int main() { std::thread th(hello); th.join(); return 0; } jni/Application.mk APP_OPTIM := release APP_MODULES := hello_thread APP_STL := gnustl_static jni/Android.mk LOCAL_PATH := $(call my-dir) include $(CLEAR_VARS) LOCAL_CPPFLAGS += -std=c++0x -frtti LOCAL_MODULE := hello_thread LOCAL_LDLIBS := -L$(SYSROOT)/usr/lib -pthread LOCAL_SRC_FILES := hello_thread.cpp include $(BUILD_EXECUTABLE) ndk-build returns an error arguing that 'thread' is not a member of 'std'. I issued ndk-build -n to get the compilation command and issued it alone in my shell: /home/evigier/android-ndk-r8b/toolchains/arm-linux-androideabi-4.6/prebuilt/linux-x86/bin/arm-linux-androideabi-g++ -MMD -MP -MF /home/evigier/eclipse_workspace/hello_thread/obj/local/armeabi/objs/hello_thread/hello_thread.o.d -fpic -ffunction-sections -funwind-tables -fstack-protector -D__ARM_ARCH_5__ -D__ARM_ARCH_5T__ -D__ARM_ARCH_5E__ -D__ARM_ARCH_5TE__ -march=armv5te -mtune=xscale -msoft-float -fno-exceptions -fno-rtti -mthumb -Os -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -I/home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include -I/home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/libs/armeabi/include -I/home/evigier/eclipse_workspace/hello_thread/jni -DANDROID -Wa,--noexecstack -std=c++0x -frtti -O2 -DNDEBUG -g -I/home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include -c /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp -o /home/evigier/eclipse_workspace/hello_thread/obj/local/armeabi/objs/hello_thread/hello_thread.o Compile++ thumb : hello_thread <= hello_thread.cpp In file included from /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/stdio.h:55:0, from /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/wchar.h:33, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/cwchar:46, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/bits/postypes.h:42, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/iosfwd:42, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/ios:39, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/ostream:40, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/iostream:40, from jni/hello_thread.cpp:4: /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/sys/types.h:124:9: error: 'uint64_t' does not name a type /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp: In function 'int main()': /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:14:5: error: 'thread' is not a member of 'std' /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:14:17: error: expected ';' before 'th' /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:15:5: error: 'th' was not declared in this scope I have read a lot of threads/questions about POSIX threads and C++ threads, but still cannot find my answer. My arm-linux-androideabi/include/c++/4.6/thread file defines class thread in std only: #if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1) They don't seem to be defined in my SDK (c++config.h). But how can I possibly turn them on safely?
    Do I need to compile my own toolchain to use (non-p)threads? My host computer is: Linux evigier-ThinkPad-X220 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 20:45:39 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
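
    Two separate problems are visible in the log, for what it's worth. The 'uint64_t' does not name a type error comes from strict -std=c++0x hiding non-standard typedefs in the Android headers; switching to -std=gnu++0x usually clears it. The missing std::thread is exactly the _GLIBCXX_HAS_GTHREADS guard quoted above: the gnustl shipped with NDK r8b was not built with gthreads, so <thread> compiles to an empty namespace, and rebuilding a toolchain with thread support (or moving to a later NDK) is the heavyweight route. A lighter sketch is to fall back to pthreads, which bionic provides without extra link flags:

      // jni/hello_thread.cpp - pthread fallback while gnustl lacks std::thread
      #include <pthread.h>
      #include <cstdio>

      static void* hello(void*) {
          std::printf("Hi, I'm a thread!\n");
          return NULL;
      }

      int main() {
          pthread_t th;
          pthread_create(&th, NULL, hello, NULL);  // no -lpthread needed on bionic
          pthread_join(th, NULL);
          return 0;
      }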

    Read the article

  • MD5 file processing

    - by Ric Coles
    Good morning all, I'm working on an MD5 file integrity check tool in C#. How long should it take to compute an MD5 checksum for a file? For example, if I try to hash a 2 GB .mpg file, it takes around 5 minutes or more each time. This seems overly long. Am I just being impatient? Below is the code I'm running public string getHash(String @fileLocation) { FileStream fs = new FileStream(@fileLocation, FileMode.Open); HashAlgorithm alg = new HMACMD5(); byte[] hashValue = alg.ComputeHash(fs); string md5Result = ""; foreach (byte x in hashValue) { md5Result += x; } fs.Close(); return md5Result; } Any suggestions will be appreciated. Regards
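
    Two things stand out before timing anything. new HMACMD5() is a keyed MAC with a random key per instance, not plain MD5, so quite apart from speed, that "checksum" changes on every run. And concatenating bytes with += formats them as unpadded decimal rather than hex. A hedged sketch of the conventional version; the 1 MB buffer size is a guess, not a tuned value:

      using System;
      using System.IO;
      using System.Security.Cryptography;
      using System.Text;

      public static string GetMd5Hash(string fileLocation)
      {
          using (var md5 = MD5.Create())   // plain MD5, stable across runs
          using (var fs = new FileStream(fileLocation, FileMode.Open, FileAccess.Read))
          using (var bs = new BufferedStream(fs, 1024 * 1024))
          {
              byte[] hash = md5.ComputeHash(bs);
              var sb = new StringBuilder(hash.Length * 2);
              foreach (byte b in hash)
                  sb.Append(b.ToString("x2"));   // zero-padded lowercase hex
              return sb.ToString();
          }
      }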

    Read the article

  • UnauthorizedAccessException cannot resolve Directory.GetFiles failure

    - by Ric Coles
    Hi all, the Directory.GetFiles method fails on the first encounter with a folder it has no access rights to. The method throws an UnauthorizedAccessException (which can be caught), but by the time this is done, the method has already failed/terminated. The code I am using is listed below: try { // looks in stated directory and returns the path of all files found getFiles = Directory.GetFiles(@directoryToSearch, filetype, SearchOption.AllDirectories); } catch (UnauthorizedAccessException) { } As far as I am aware, there is no way to check beforehand whether a certain folder has access rights defined. In my example, I'm searching a disk across a network, and when I come across a root-access-only folder, my program fails. I have searched for a few days now for any sort of resolution, but this problem doesn't seem to be very prevalent here or on Google. I look forward to any suggestions you may have, Regards
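
    The usual workaround is to give up on SearchOption.AllDirectories and walk the tree one directory at a time, so a denied folder only skips that branch instead of aborting the whole search. A minimal sketch (a method to drop into your class):

      using System;
      using System.Collections.Generic;
      using System.IO;

      static IEnumerable<string> SafeEnumerateFiles(string root, string pattern)
      {
          var pending = new Stack<string>();
          pending.Push(root);
          while (pending.Count > 0)
          {
              string dir = pending.Pop();
              string[] files = null;
              try
              {
                  files = Directory.GetFiles(dir, pattern);
                  foreach (string sub in Directory.GetDirectories(dir))
                      pending.Push(sub);
              }
              catch (UnauthorizedAccessException)
              {
                  // no rights here; skip this branch and carry on
              }
              if (files != null)
                  foreach (string f in files)
                      yield return f;   // yield outside try/catch, as C# requires
          }
      }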

    Read the article

  • Right-Time Retail Part 2

    - by David Dorf
    This is part two of the three-part series. Right-Time Integration Of course these real-time enabling technologies are only as good as the systems that utilize them, and it only takes one bottleneck to slow everyone else down. What good is an immediate stock-out notification if the supply chain can’t react until tomorrow? Since being formed in 2006, Oracle Retail has been not only adding more integrations between systems, but also modernizing integrations for appropriate speed. Notice I tossed in the word “appropriate.” Not everything needs to be real-time – again, we’re talking about Right-Time Retail. The speed of data capture, analysis, and execution must be synchronized or you’re wasting effort. Unfortunately, there isn’t an enterprise-wide dial that you can crank up for your estate. You’ll need to improve things piecemeal, with people and processes as limiting factors while choosing the appropriate types of integrations. There are three integration styles we see in the retail industry. First is batch. I know, the word “batch” just sounds slow, but this pattern is less about velocity and more about volume. When there are large amounts of data to be moved, you’ll want to use batch processes. Our technology of choice here is Oracle Data Integrator (ODI), which provides a fast version of Extract-Transform-Load (ETL). Instead of the three-step process, the load and transform steps are combined to save time. ODI is a key technology for moving data into Retail Analytics where we can apply science. Performing analytics on each sale as it occurs doesn’t make any sense, so we batch up a statistically significant amount and submit all at once. The second style is fire-and-forget. For some types of data, we want the data to arrive ASAP but immediacy is not necessary. Speed is less important than guaranteed delivery, so we use message-oriented middleware available in both Weblogic and the Oracle database. For example, Point-of-Service transactions are queued for delivery to Central Office at corporate. If the network is offline, those transactions remain in the queue and will be delivered when the network returns. Transactions cannot be lost and they must be delivered in order. (Ever tried processing a return before the sale?) To enhance the standard queues, we offer the Retail Integration Bus (RIB) to help the management and monitoring of fire-and-forget messaging in the enterprise. The third style is request-response and is most commonly implemented as Web services. This is a synchronous message where the sender waits for a response. In this situation, the volume of data is small, guaranteed delivery is not necessary, but speed is very important.
Examples include the website checking inventory, a price lookup, or processing a credit card authorization. The Oracle Service Bus (OSB) typically handles the routing of such messages, and we’ve enhanced its abilities with the Retail Service Backbone (RSB). To better understand these integration patterns and where they apply within the retail enterprise, we’re providing the Retail Reference Library (RRL) at no charge to Oracle Retail customers. The library is composed of a large number of industry business processes, including those necessary to support Commerce Anywhere, as well as detailed architectural diagrams. These diagrams allow implementers to understand the systems involved in integrations and the specific data payloads. Furthermore, with our upcoming release we’ll be providing a new tool called the Retail Integration Console (RIC) that allows IT to monitor and manage integrations from a single point. Using RIC, retailers can quickly discern where integration activity is occurring, volume statistics, average response times, and errors. The dashboards provide the ability to dive down into the architecture documentation to gather information all the way down to the specific payload. Retailers that want real-time integrations will also need real-time monitoring of those integrations to ensure service-level agreements are maintained. Part 3 looks at marketing.

    Read the article

  • Reset root password in MySQL without access to the mysql table

    - by Rik89
    I am having an issue on OS X 10.7.5. I used to use MAMP, but because of .htaccess issues I am now using my own locally compiled server from a long time ago. The problem is I forgot the root password for MySQL. I have tried updating the password through the terminal using mysql -u root, but I get this error message - ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO) Thanks Ric
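
    The standard recovery path when no password works is to restart mysqld with grant checking disabled and rewrite the password. A sketch; the mysql.server path is an assumption for a self-compiled install, and the UPDATE syntax matches the pre-5.7 servers of that era:

      sudo /usr/local/mysql/support-files/mysql.server stop   # path is an assumption
      sudo mysqld_safe --skip-grant-tables &
      mysql -u root -e "UPDATE mysql.user SET Password=PASSWORD('newpass') WHERE User='root'; FLUSH PRIVILEGES;"
      mysqladmin -u root -pnewpass shutdown                   # stop the unprotected instance
      sudo /usr/local/mysql/support-files/mysql.server start  # restart normally, log in with newpass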

    Read the article

  • How to customize NSNumberFormatter?

    - by Frederick C. Lee
    I wish to use NSNumberFormatter to merely attach a percent sign ('%') to the supplied number WITHOUT having it multiplied by 100. The canned kCFNumberFormatterPercentStyle automatically multiplies by 100, which I don't want. For example, I want 5.0 converted to 5.0% rather than 500%. I tried the following: NSNumberFormatter *percentFormatter = [[NSNumberFormatter alloc] init]; [percentFormatter setNumberFormat:@"##0.00%;-##0.00%"]; But 'setNumberFormat' doesn't exist in NSNumberFormatter. I need to use this NSNumberFormatter for my Core Plot label. How can I customize NSNumberFormatter? Ric.
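
    One route that avoids the percent style's x100, sketched with NSNumberFormatter calls that do exist: a decimal-style formatter with a literal '%' suffix. The digit settings are illustrative, not required:

      NSNumberFormatter *percentFormatter = [[NSNumberFormatter alloc] init];
      [percentFormatter setNumberStyle:NSNumberFormatterDecimalStyle];
      [percentFormatter setMinimumFractionDigits:1];
      [percentFormatter setPositiveSuffix:@"%"];   // appended verbatim, no scaling
      [percentFormatter setNegativeSuffix:@"%"];
      NSString *label = [percentFormatter stringFromNumber:[NSNumber numberWithDouble:5.0]]; // @"5.0%"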

    Read the article

  • How to integrate a dynamically generated JSON object into another object?

    - by Marco Ciarfaglia
    How can I put this JSON object in the function or object below? // this function generates a JSON object dynamically $(".n_ListTitle").each(function(i, v) { var node = $(this); var nodeParent = node.parent(); var nodeText = node.text(); var nodePrice = node.siblings('.n_ListPrice'); var prodPrice = $(nodePrice).text(); var prodId = nodeParent.attr('id').replace('ric', ''); var prodTitle = nodeText; var json = { id : prodId, price : prodPrice, currency : "CHF", name : prodTitle }; return json; }); TDConf.Config = { products : [ // here the JSON objects should be inserted {id: "[product-id1]", price:"[price1]", currency:"[currency1]", name:"[product-name1]"}, {id: "[product-id2]", price:"[price2]", currency:"[currency2]", name:"[product-name2]"}, ... })], containerTagId :"..." }; If it is not understandable, please ask :) Thanks in advance for helping me figure this out!
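
    The core problem is that a value returned from .each() is simply discarded. A sketch of the usual pattern, built from the selectors the question already uses: collect the objects with .map().get(), then hand the array to the config object:

      // .map() gathers one object per matched element; .get() unwraps
      // the jQuery result into a plain array.
      var products = $(".n_ListTitle").map(function () {
          var node = $(this);
          return {
              id: node.parent().attr('id').replace('ric', ''),
              price: node.siblings('.n_ListPrice').text(),
              currency: "CHF",
              name: node.text()
          };
      }).get();

      TDConf.Config = {
          products: products,
          containerTagId: "..."
      };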

    Read the article

  • Top-Rated JavaScript Blogs

    - by Andreas Grech
    I am currently trying to find some blogs that talk (almost solely) about the JavaScript language, because bloggers with real-life experience in work or home development can often explain certain quirks and hidden features more clearly and concisely than most 'Official Language Specifications'. Below is a list of blogs that are JavaScript-based (I will update the list as more answers flow in): DHTML Kitchen, by Garrett Smith Robert's Talk, by Robert Nyman EJohn, by John Resig (of jQuery) Crockford's JavaScript Page, by Douglas Crockford Dean.edwards.name, by Dean Edwards Ajaxian, by various (@Martin) The JavaScript Weblog, by various SitePoint's JavaScript and CSS Page, by various AjaxBlog, by various Eric Lippert's Blog, by Eric Lippert (talks about JScript and JScript.Net) Web Bug Track, by various (@scunliffe) The Strange Zen Of JavaScript, by Scott Andrew Alex Russell (of Dojo) (@Eran Galperin) Ariel Flesler (@Eran Galperin) Nihilogic, by Jacob Seidelin (@llimllib) Peter's Blog, by Peter Michaux (@Borgar) Flagrant Badassery, by Steve Levithan (@Borgar) ./with Imagination, by Dustin Diaz (@Borgar) HedgerWow (@Borgar) Dreaming in Javascript, by Nosredna spudly.shuoink.com, by Stephen Sorensen Yahoo! User Interface Blog, by various (@Borgar) remy sharp's b:log, by Remy Sharp (@Borgar) JScript Blog, by the JScript Team (@Borgar) Dmitry Baranovskiy’s Web Log, by Dmitry Baranovskiy James Padolsey's Blog (@Kenny Eliasson) Perfection Kills; Exploring JavaScript by example, by Juriy Zaytsev DailyJS (@Ric) NCZOnline, by Nicholas C. Zakas (@Kenny Eliasson) Which top-rated blogs am I missing from the above list that you consider essential reading (and following) for any JavaScript developer?

    Read the article

  • My First F# program

    - by sudaly
    Hi, I just finished writing my first F# program. Functionality-wise the code works the way I wanted, but I am not sure if it is efficient. I would much appreciate it if someone could review the code for me and point out the areas where it can be improved. Thanks, Sudaly open System open System.IO open System.IO.Pipes open System.Text open System.Collections.Generic open System.Runtime.Serialization [<DataContract>] type Quote = { [<field: DataMember(Name="securityIdentifier") >] RicCode:string [<field: DataMember(Name="madeOn") >] MadeOn:DateTime [<field: DataMember(Name="closePrice") >] Price:float } let m_cache = new Dictionary<string, Quote>() let ParseQuoteString (quoteString:string) = let data = Encoding.Unicode.GetBytes(quoteString) let stream = new MemoryStream() stream.Write(data, 0, data.Length); stream.Position <- 0L let ser = Json.DataContractJsonSerializer(typeof<Quote array>) let results:Quote array = ser.ReadObject(stream) :?> Quote array results let RefreshCache quoteList = m_cache.Clear() quoteList |> Array.iter(fun result->m_cache.Add(result.RicCode, result)) let EstablishConnection() = let pipeServer = new NamedPipeServerStream("testpipe", PipeDirection.InOut, 4) let mutable sr = null printfn "[F#] NamedPipeServerStream thread created, Wait for a client to connect" pipeServer.WaitForConnection() printfn "[F#] Client connected." try // Stream for the request. sr <- new StreamReader(pipeServer) with | _ as e -> printfn "[F#]ERROR: %s" e.Message sr while true do let sr = EstablishConnection() // Read request from the stream. printfn "[F#] Ready to Receive data" sr.ReadLine() |> ParseQuoteString |> RefreshCache printfn "[F#]Quot Size, %d" m_cache.Count let quot = m_cache.["MSFT.OQ"] printfn "[F#]RIC: %s" quot.RicCode printfn "[F#]MadeOn: %s" (String.Format("{0:T}",quot.MadeOn)) printfn "[F#]Price: %f" quot.Price
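
    One small review point, as a sketch mirroring the original ParseQuoteString: `use` bindings give deterministic disposal (the MemoryStream above is never disposed), and the byte buffer can feed the stream's constructor directly, which also drops the manual Position bookkeeping:

      // Same opens as the original program are assumed.
      let parseQuoteString (quoteString: string) =
          // MemoryStream(byte[]) starts readable at position 0; 'use' disposes it.
          use stream = new MemoryStream(Encoding.Unicode.GetBytes(quoteString))
          let ser = Json.DataContractJsonSerializer(typeof<Quote array>)
          ser.ReadObject(stream) :?> Quote array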

    Read the article

  • Cross-platform, human-readable du on the root partition that truly ignores other filesystems

    - by nice_line
    I hate this so much: Linux builtsowell 2.6.18-274.7.1.el5 #1 SMP Mon Oct 17 11:57:14 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux df -kh Filesystem Size Used Avail Use% Mounted on /dev/mapper/mpath0p2 8.8G 8.7G 90M 99% / /dev/mapper/mpath0p6 2.0G 37M 1.9G 2% /tmp /dev/mapper/mpath0p3 5.9G 670M 4.9G 12% /var /dev/mapper/mpath0p1 494M 86M 384M 19% /boot /dev/mapper/mpath0p7 7.3G 187M 6.7G 3% /home tmpfs 48G 6.2G 42G 14% /dev/shm /dev/mapper/o10g.bin 25G 7.4G 17G 32% /app/SIP/logs /dev/mapper/o11g.bin 25G 11G 14G 43% /o11g tmpfs 4.0K 0 4.0K 0% /dev/vx lunmonster1q:/vol/oradb_backup/epmxs1q1 686G 507G 180G 74% /rpmqa/backup lunmonster1q:/vol/oradb_redo/bisxs1q1 4.0G 1.6G 2.5G 38% /bisxs1q/rdoctl1 lunmonster1q:/vol/oradb_backup/bisxs1q1 686G 507G 180G 74% /bisxs1q/backup lunmonster1q:/vol/oradb_exp/bisxs1q1 2.0T 1.1T 984G 52% /bisxs1q/exp lunmonster2q:/vol/oradb_home/bisxs1q1 10G 174M 9.9G 2% /bisxs1q/home lunmonster2q:/vol/oradb_data/bisxs1q1 52G 5.2G 47G 10% /bisxs1q/oradata lunmonster1q:/vol/oradb_redo/bisxs1q2 4.0G 1.6G 2.5G 38% /bisxs1q/rdoctl2 ip-address1:/vol/oradb_home/cspxs1q1 10G 184M 9.9G 2% /cspxs1q/home ip-address2:/vol/oradb_backup/cspxs1q1 674G 314G 360G 47% /cspxs1q/backup ip-address2:/vol/oradb_redo/cspxs1q1 4.0G 1.5G 2.6G 37% /cspxs1q/rdoctl1 ip-address2:/vol/oradb_exp/cspxs1q1 4.1T 1.5T 2.6T 37% /cspxs1q/exp ip-address2:/vol/oradb_redo/cspxs1q2 4.0G 1.5G 2.6G 37% /cspxs1q/rdoctl2 ip-address1:/vol/oradb_data/cspxs1q1 160G 23G 138G 15% /cspxs1q/oradata lunmonster1q:/vol/oradb_exp/epmxs1q1 2.0T 1.1T 984G 52% /epmxs1q/exp lunmonster2q:/vol/oradb_home/epmxs1q1 10G 80M 10G 1% /epmxs1q/home lunmonster2q:/vol/oradb_data/epmxs1q1 330G 249G 82G 76% /epmxs1q/oradata lunmonster1q:/vol/oradb_redo/epmxs1q2 5.0G 609M 4.5G 12% /epmxs1q/rdoctl2 lunmonster1q:/vol/oradb_redo/epmxs1q1 5.0G 609M 4.5G 12% /epmxs1q/rdoctl1 /dev/vx/dsk/slaxs1q/slaxs1q-vol1 183G 17G 157G 10% /slaxs1q/backup /dev/vx/dsk/slaxs1q/slaxs1q-vol4 173G 58G 106G 36% /slaxs1q/oradata /dev/vx/dsk/slaxs1q/slaxs1q-vol5 75G 952M 71G 2% /slaxs1q/exp /dev/vx/dsk/slaxs1q/slaxs1q-vol2 9.8G 381M 8.9G 5% /slaxs1q/home /dev/vx/dsk/slaxs1q/slaxs1q-vol6 4.0G 1.6G 2.2G 42% /slaxs1q/rdoctl1 /dev/vx/dsk/slaxs1q/slaxs1q-vol3 4.0G 1.6G 2.2G 42% /slaxs1q/rdoctl2 /dev/mapper/appoem 30G 1.3G 27G 5% /app/em Yet, I equally, if not quite a bit more, also hate this: SunOS solarious 5.10 Generic_147440-19 sun4u sparc SUNW,SPARC-Enterprise Filesystem size used avail capacity Mounted on kiddie001Q_rpool/ROOT/s10s_u8wos_08a 8G 7.7G 1.3G 96% / /devices 0K 0K 0K 0% /devices ctfs 0K 0K 0K 0% /system/contract proc 0K 0K 0K 0% /proc mnttab 0K 0K 0K 0% /etc/mnttab swap 15G 1.8M 15G 1% /etc/svc/volatile objfs 0K 0K 0K 0% /system/object sharefs 0K 0K 0K 0% /etc/dfs/sharetab fd 0K 0K 0K 0% /dev/fd kiddie001Q_rpool/ROOT/s10s_u8wos_08a/var 31G 8.3G 6.6G 56% /var swap 512M 4.6M 507M 1% /tmp swap 15G 88K 15G 1% /var/run swap 15G 0K 15G 0% /dev/vx/dmp swap 15G 0K 15G 0% /dev/vx/rdmp /dev/dsk/c3t4d4s0 3 20G 279G 41G 88% /fs_storage /dev/vx/dsk/oracle/ora10g-vol1 292G 214G 73G 75% /o10g /dev/vx/dsk/oec/oec-vol1 64G 33G 31G 52% /oec/runway /dev/vx/dsk/oracle/ora9i-vol1 64G 33G 31G 59% /o9i /dev/vx/dsk/home 23G 18G 4.7G 80% /export/home /dev/vx/dsk/dbwork/dbwork-vol1 292G 214G 73G 92% /db03/wk01 /dev/vx/dsk/oradg/ebusredovol 2.0G 475M 1.5G 24% /u21 /dev/vx/dsk/oradg/ebusbckupvol 200G 32G 166G 17% /u31 /dev/vx/dsk/oradg/ebuscrtlvol 2.0G 475M 1.5G 24% /u20 kiddie001Q_rpool 31G 97K 6.6G 1% /kiddie001Q_rpool monsterfiler002q:/vol/ebiz_patches_nfs/NSA0304 203G 173G 29G 86% 
/oracle/patches /dev/odm 0K 0K 0K 0% /dev/odm The people with the authority don't rotate logs or delete packages after install in my environment. Standards, remediation, cohesion...all fancy foreign words to me. ============== How am I supposed to deal with / filesystem full issues across multiple platforms that have a devastating number of mounts? On Red Hat el5, du -x apparently avoids traversal into other filesystems. While this may be so, it does not appear to do anything if run from the / directory. On Solaris 10, the equivalent flag is du -d, which apparently packs no surprises, allowing Sun to uphold its legacy of inconvenience effortlessly. (I'm hoping I've just been doing it wrong.) I offer up for sacrifice my Frankenstein's monster. Tell me how ugly it is. Tell me I should download forbidden 3rd party software. Tell me I should perform unauthorized coreutils updates, piecemeal, across 2000 systems, with no single sign-on, no authorized keys, and no network update capability. Then, please help me make this bastard better: pwd / du * | egrep -v "$(echo $(df | awk '{print $1 "\n" $5 "\n" $6}' | \ cut -d\/ -f2-5 | egrep -v "[0-9]|^$|Filesystem|Use|Available|Mounted|blocks|vol|swap")| \ sed 's/ /\|/g')" | egrep -v "proc|sys|media|selinux|dev|platform|system|tmp|tmpfs|mnt|kernel" | \ cut -d\/ -f1-2 | sort -k2 -k1,1nr | uniq -f1 | sort -k1,1n | cut -f2 | xargs du -shx | \ egrep "G|[5-9][0-9]M|[1-9][0-9][0-9]M" My biggest failure and regret is that it still requires a single character edit for Solaris: pwd / du * | egrep -v "$(echo $(df | awk '{print $1 "\n" $5 "\n" $6}' | \ cut -d\/ -f2-5 | egrep -v "[0-9]|^$|Filesystem|Use|Available|Mounted|blocks|vol|swap")| \ sed 's/ /\|/g')" | egrep -v "proc|sys|media|selinux|dev|platform|system|tmp|tmpfs|mnt|kernel" | \ cut -d\/ -f1-2 | sort -k2 -k1,1nr | uniq -f1 | sort -k1,1n | cut -f2 | xargs du -shd | \ egrep "G|[5-9][0-9]M|[1-9][0-9][0-9]M" This will exclude all non / filesystems in a du search from the / directory by basically munging an egrepped df from a second pipe-delimited egrep regex subshell exclusion that is naturally further excluded upon by a third egrep in what I would like to refer to as "the whale." The munge-fest frantically escalates into some xargs du recycling where -x/-d is actually useful, and a final, gratuitous egrep spits out a list of directories that almost feels like an accomplishment: Linux: 54M etc/gconf 61M opt/quest 77M opt 118M usr/ ##===\ 149M etc 154M root 303M lib/modules 313M usr/java ##====\ 331M lib 357M usr/lib64 ##=====\ 433M usr/lib ##========\ 1.1G usr/share ##=======\ 3.2G usr/local ##========\ 5.4G usr ##<=============Ascending order to parent 94M app/SIP ##<==\ 94M app ##<=======Were reported as 7gb and then corrected by second du with -x. Solaris: 63M etc 490M bb 570M root/cores.ric.20100415 1.7G oec/archive 1.1G root/packages 2.2G root 1.7G oec Guess what? It's really slow. Edit: Are there any bash one-liner heroes out there that can turn my bloated abomination into divine intervention, or at least something resembling gingerly copypasta?
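
    For the Linux half at least, there may be a much shorter path, sketched on the assumption that coreutils is new enough for sort -h: handing du -x the mount point itself, rather than the glob *, keeps the walk on the root filesystem, because each glob argument restarts the traversal on whatever filesystem that argument lives on:

      du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -20
      # Solaris 10 analogue (du -d stays on one filesystem; no human-readable sort):
      # du -dk / | sort -rn | head -20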

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events. CLSF - 17th October XFS update (led by Jeff Liu) XFS keeps making rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in a sample statistic comparing XFS, Ext4+JBD2, and Btrfs via: # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-) Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-) Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-) What made up those changes in XFS? Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock. Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, reducing CPU overhead. User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning from Linux 3.10. Thanks to Dwight Engen's efforts for this. Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e., with CRC enabled. CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments. Readahead log object recovery; this change can speed up log replay significantly. Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes carrying post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2). Bitter arguments ensued from this session, especially over the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because: Stable - XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors in XFS than in other filesystems over the past year or more. I owe this to the XFS upstream code reviewers, who always perform serious code review as well as testing. Good performance for large and small files - the claim that XFS does not work well for small files has been an old story for years. Best choice (maybe) for distributed PB-scale filesystems - e.g., Ceph recommends deploying its OSD daemon on XFS because Ext4 has a limited xattr size. Best choice for large storage (>16TB) - Ext4 does not support a single file larger than around 15.95TB. Scalability - any objection to XFS being best on this point? :) XFS deals with transaction concurrency better than Ext4 - why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4. Misc - Ext4 is widely used and has proved fast and stable under various loads and scenarios; XFS just needs more users, and Btrfs is still on the road to maturity.
    Ceph Introduction (led by Li Wang) This was a hot topic. Li gave us a nice introduction to the design as well as their current work. The Ceph client has actually been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work focuses on inline data support to separate the metadata and data storage and reduce file access time: a file access needs two communications, fetching the metadata from the MDS and then getting the data from the OSD, and small-file access is also limited by network latency. The solution is, for small files, to store the data with the metadata, so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. This way, they can reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature they have only run some small-scale testing, but really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production environments for large-scale, PB-level storage, but they could not give a positive answer; it looks like Ceph is not even rolled out across Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they would like to try 6000 nodes in the future. Improve Linux swap for flash storage (led by Shaohua Li) Because of high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap was designed for slow hard disk storage, so there are a lot of challenges to using SSD efficiently for swap. SWAPOUT swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but it uses a slow linear scan. It becomes a bottleneck when finding many adjacent pages on SSD. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD. IO pattern: In most cases the swap I/O comes in an interleaved pattern, because there are multiple reclaimers or a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved I/O to some extent, we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps each reclaimer do sequential I/O, which the block layer can merge more easily. TLB flush: If we're reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process we need to clear the PTE twice: first the 'A' (ACCESS) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline. SWAPIN: A page fault does iodepth=1 synchronous I/O, which is a bit wasteful if only a page-sized I/O is issued. The obvious solution is swap readahead.
    But the current in-kernel swap readahead is arbitrary (always 8 pages) and doesn't perform well for either random or sequential access workloads. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the change lives in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be made there). SWAP discard: As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to an async discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard is also optimized to the cluster. Misc: lock contention: With many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. But there is still some lock contention on very high-speed SSDs because of the swapcache address_space lock. Zproject (led by Bob Liu) Bob gave us a very nice introduction to the current memory compression status. There are now three projects (zswap/zram/zcache) which all aim to smooth out swap I/O storms and improve performance, but each has its own pros and cons. ZSWAP is implemented on top of the frontswap API and uses a dynamic allocator named Zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device, but this can fail when allocating the two pages. ZRAM acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since more pages are needed to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are now in the drivers/staging tree, and in the mm community there are discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet. ZCACHE handles file page compression, but it was removed from staging recently. From industry (led by Tang Jie, LSI) An LSI engineer introduced several new products to us. The first was RAID5/6 cards that use full-stripe writes to improve performance. The second was the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It's called DuraWrite, and a typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression that is transparent to the upper layers. LSI's testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for databases. Another thing worth mentioning is an NV-DRAM memory in NMR/Raptor which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address, using the standard system call mmap(). He said it is very useful for database logging now.
    This kind of NVM product has begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g., journaling may need a redesign to fully utilize such nonvolatile memory. OCFS2 (led by Canquan Shen) Without a doubt, Huawei has been the biggest contributor to OCFS2 over the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work in progress is support for ATS (atomic test and set), which can work together with DLM. This idea looks inspired by VMware's VMFS locking, i.e., http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html CLK - 18th October 2013 Improving Linux Development with Better Tools (Andi Kleen) This talk focused on how to find and solve bugs as Linux grows in complexity. Generally, we can do this with the following kinds of tools: Static code checkers, e.g., sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can help check a lot of things - simple mistakes, complex problems - but the challenges are: some are very slow, they produce false positives, and it may take a concentrated effort to get the false positives down. In particular, no static checker I found can follow indirect calls ("OO in C", common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); } foo->do_foo(foo); Dynamic runtime checkers, e.g., thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers. Fuzzers/test suites, e.g., Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers are moving toward automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf Debuggers/tracers to understand code, e.g., ftrace, which can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run it at all times during debugging. Tools to read and understand source, e.g., grep/cscope work great for many cases, but do not understand indirect pointers (the OO-in-C model used in the kernel). Give us all "do_foo" instances: struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo->do_foo(foo); It would be great to have a cscope-like tool that understands this based on types/initializers. XFS: The High Performance Enterprise File System (Jeff Liu) [slides] I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts comparing the performance of XFS, Btrfs, and Ext4 on small files. About a dozen attendees raised their hands when I asked who has experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands - but they were Chris Mason, Ric Wheeler, and another attendee. The attendee questions mainly focused on stability and comparison with other file systems. Linux Containers (Feng Gao) The speaker introduced the purpose of the various namespaces - mount/UTS/IPC/Network/Pid/User - as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than LXC itself.
    Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two other possible future namespaces: the first is audit, though it is not yet clear whether it should be assigned to the user namespace or not; the other concerns syslog, but the question is, do we really need it? In-memory Compression (Bob Liu) Same as at CLSF, a nice introduction that I have already covered above. Misc: There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things. -- Jeff Liu

    Read the article
