Search Results

Search found 5785 results on 232 pages for 'atomic compare and swap'.

  • Shell Script - comparing lines of text, deleting matches

    - by SirRatty
    Hi all, I've done some searching for this but cannot find what I'm after, specifically. I have two files: "a.txt", "b.txt". Each contains a list of email addresses, separated by newlines. For all lines in "a.txt", I need to check for a match anywhere in "b.txt". If so, the email address in "a.txt" needs to be removed. (Alternatively, a new file "c.txt" could be created with the output if that is easier.) I'm using Mac OS X, so am looking for a shell script that could help, or pointers to how I'd go about constructing the script. Thanks for any help.
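
    One hedged sketch: since both files hold one address per line, grep can do the whole job on Mac OS X. -F treats the patterns as fixed strings, -x matches whole lines only, -v inverts the match, and -f reads the patterns from b.txt:

        # Keep only the addresses in a.txt that do not appear anywhere in b.txt
        grep -Fxvf b.txt a.txt > c.txt

    Writing to c.txt avoids clobbering a.txt while it is being read; move c.txt back over a.txt afterwards if in-place behavior is wanted.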

  • Fetch the most viewed data in databases

    - by Erik Edgren
    I want to get the most viewed photo from the database but I don't know how to accomplish this. Here's my SQL at the moment:

        SELECT * FROM photos AS p, viewers AS v
        WHERE p.id = v.id_photo
        GROUP BY v.id_photo

    The tables:

        CREATE TABLE IF NOT EXISTS `photos` (
          `id` int(10) NOT NULL AUTO_INCREMENT,
          `photo_filename` varchar(50) NOT NULL,
          `photo_camera` varchar(150) NOT NULL,
          `photo_taken` datetime NOT NULL,
          `photo_resolution` varchar(10) NOT NULL,
          `photo_exposure` varchar(10) NOT NULL,
          `photo_iso` varchar(3) NOT NULL,
          `photo_fnumber` varchar(10) NOT NULL,
          `photo_focallength` varchar(10) NOT NULL,
          `post_coordinates` text NOT NULL,
          `post_description` text NOT NULL,
          `post_uploaded` datetime NOT NULL,
          `post_edited` datetime NOT NULL,
          `checkbox_approxcoor` enum('0','1') NOT NULL DEFAULT '0',
          PRIMARY KEY (`id`),
          UNIQUE KEY `id` (`id`)
        )

        CREATE TABLE IF NOT EXISTS `viewers` (
          `id` int(10) NOT NULL AUTO_INCREMENT,
          `id_photo` int(10) DEFAULT '0',
          `ipaddress` text NOT NULL,
          `date_viewed` datetime NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `id` (`id`)
        )

    The data in viewers looks like this:

        (1, 85, '3892a0ab97d6ff325f285b27b847070f', '2012-06-21 22:49:25'),
        (2, 84, '3892a0ab97d6ff325f285b27b847070f', '2012-06-21 22:49:25'),
        (3, 85, '3892a0ab97d6ff325f285b27b847070f', '2012-06-21 22:49:25');

    A single row from photos, to show what the rows in that table look like:

        (85, 'P1170986.JPG', 'Panasonic DMC-LX3', '2012-06-19 18:00:40', '3968x2232', '10/8000', '80', '50/10', '51/10', '', '', '2012-06-19 18:45:17', '0000-00-00 00:00:00', '0')

    At the moment the SQL only prints the photo with ID 84. That's wrong - it should print the photo with ID 85, which has two views. How can I fix this problem? Thanks in advance.
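
    A hedged fix: count the views per photo and keep only the top row. Assuming each row in viewers represents one view, something like this should work in MySQL:

        SELECT p.*, COUNT(v.id) AS view_count
        FROM photos AS p
        JOIN viewers AS v ON v.id_photo = p.id
        GROUP BY p.id
        ORDER BY view_count DESC
        LIMIT 1;

    The original query grouped without ordering, so MySQL returned an arbitrary group (here the one for ID 84) rather than the group with the most views.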

  • SQL Server 2008: If Multiple Values Set In Other Multiple Values Set

    - by AJH
    In SQL, is there any way to accomplish something like this? This is based on a report built in SQL Server Report Builder, where the user can specify multiple text values as a single report parameter. The query for the report grabs all of the values the user selected and stores them in a single variable. I need a way for the query to return only records that have associations to EVERY value the user specified.

        -- Assume there's a table of Elements with thousands of entries.
        -- Now we declare a list of properties for those Elements to be associated with.
        create table #masterTable (
            ElementId int,
            Text varchar(10)
        )
        insert into #masterTable (ElementId, Text) values (1, 'Red');
        insert into #masterTable (ElementId, Text) values (1, 'Coarse');
        insert into #masterTable (ElementId, Text) values (1, 'Dense');
        insert into #masterTable (ElementId, Text) values (2, 'Red');
        insert into #masterTable (ElementId, Text) values (2, 'Smooth');
        insert into #masterTable (ElementId, Text) values (2, 'Hollow');
        -- Element 1 is Red, Coarse, and Dense. Element 2 is Red, Smooth, and Hollow.
        -- The real table is actually much much larger than this; this is just an example.

        -- This is me trying to replicate how SQL Server Report Builder treats
        -- report parameters in its queries. The user selects one, some, all,
        -- or no properties from a list. The written query treats the user's
        -- selections as a single variable called @Properties.

        -- Example scenario 1: User only wants to see Elements that are BOTH Red and Dense.
        select e.*
        from Elements e
        where (@Properties) --ideally a set containing only Red and Dense
            in (select Text from #masterTable where ElementId = e.Id) --ideally a set containing only Red, Coarse, and Dense
        --Both Red and Dense are within Element 1's properties (Red, Coarse, Dense),
        --so Element 1 gets returned, but not Element 2.

        -- Example scenario 2: User only wants to see Elements that are BOTH Red and Hollow.
        select e.*
        from Elements e
        where (@Properties) --ideally a set containing only Red and Hollow
            in (select Text from #masterTable where ElementId = e.Id)
        --Both Red and Hollow are within Element 2's properties (Red, Smooth, Hollow),
        --so Element 2 gets returned, but not Element 1.

        -- Example scenario 3: User only picked the Red option.
        select e.*
        from Elements e
        where (@Properties) --ideally a set containing only Red
            in (select Text from #masterTable where ElementId = e.Id)
        --Red is within both Element 1 and Element 2's properties,
        --so both Element 1 and Element 2 get returned.

    The above syntax doesn't actually work because SQL doesn't allow multiple values on the left side of the "in" comparison. The error returned is:

        Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.

    Am I even on the right track here? Sorry if the example looks long-winded or confusing.
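
    A hedged sketch of the usual workaround ("relational division"): put the user's selections on the right side of the IN, then demand that the number of matching properties equals the number of selections. @PropertyCount is an assumed second report parameter here (in Report Builder it could be supplied as =Parameters!Properties.Count):

        select e.*
        from Elements e
        where e.Id in (
            select m.ElementId
            from #masterTable m
            where m.Text in (@Properties)   -- SSRS expands the multi-value parameter here
            group by m.ElementId
            having count(distinct m.Text) = (@PropertyCount)
        );

    The HAVING clause is what enforces "associated with EVERY selected value" rather than "associated with ANY selected value".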

  • NSArray containsObject: method

    - by Anthony Chan
    Hi, I have a simple question, but things are not behaving the way I think they should. I have an array of custom objects, and I just want to check whether a given one is within the array. I used the following code:

        NSArray *collection = [[NSArray alloc] initWithObjects:A, B, C, nil]; // custom "Item" objects
        Item *tempItem = [[Item alloc] initWithLength:1 width:2 height:3];    // 3 instance variables in "Item" objects
        if ([collection containsObject:tempItem]) {
            NSLog(@"collection contains this item");
        }

    I expected the check above to give me a positive result, but it doesn't. Further, I checked whether the objects created are the same:

        NSLog(@"L:%i W:%i H:%i", tempItem.length, tempItem.width, tempItem.height);
        for (int i = 0; i < [collection count]; i++) {
            Item *itemInArray = [collection objectAtIndex:i];
            NSLog(@"collection contains L:%i W:%i H:%i", itemInArray.length, itemInArray.width, itemInArray.height);
        }

    In the console, this is what I got:

        L:1 W:2 H:3
        collection contains L:0 W:0 H:0
        collection contains L:1 W:2 H:3
        collection contains L:6 W:8 H:2

    Obviously an item equal to tempItem is inside the collection array, but nothing shows up when I use containsObject: to check for it. Could anyone give me some direction on which part I have wrong? Thanks a lot!
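
    A hedged sketch of the likely fix: containsObject: compares with isEqual:, and NSObject's default implementation is pointer identity, so a freshly allocated tempItem never matches. Overriding isEqual: in the Item class (together with hash, which must agree with it) makes the check compare values instead:

        - (BOOL)isEqual:(id)object {
            if (self == object) return YES;
            if (![object isKindOfClass:[Item class]]) return NO;
            Item *other = (Item *)object;
            return self.length == other.length
                && self.width == other.width
                && self.height == other.height;
        }

        - (NSUInteger)hash {
            // Any combination is fine as long as equal objects hash equally.
            return self.length ^ (self.width << 8) ^ (self.height << 16);
        }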

  • sql - get the latest date of two columns

    - by stacker
    table1
    - date1 datetime not null
    - date2 nvarchar null

    I want to get the latest of these two dates:

        select date1, date2,
               cast((case when date1 > cast(date2 as datetime) then date1 else date2 end) as datetime) as LatestDate
        from table1

    Please note that date2 can be null; in that case, date1 wins.
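
    A hedged fix: when date2 is NULL, the comparison is UNKNOWN, so the CASE above falls through to the ELSE branch and returns NULL instead of date1. Testing for NULL first makes date1 win, as required:

        select date1, date2,
               case
                   when date2 is null then date1
                   when date1 > cast(date2 as datetime) then date1
                   else cast(date2 as datetime)
               end as LatestDate
        from table1;

    Comparing against coalesce(cast(date2 as datetime), date1) would work too, since date1 then wins any tie with itself.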

  • Draw a comparison between an integer in a specific row and a count

    - by XCoderX
    This is a follow-up question to this one: Check for specific integer in a row WHERE user = $name

    I want a user to be able to comment on my site exactly five times a day. After those five times, the user has to wait 24 hours. To accomplish that I raise a counter in my MySQL database, right next to the user: where the name of the user is, that is where the counter gets raised. When it reaches 5 it should stop counting, and reset after 24 hours. To check the time I use a timestamp, and I check whether the timestamp is older than 24 hours. If that is the case, the counter gets reset (-5) and the user can comment again. I use the following code, but the counter never stops at five; my guess is that my comparison is wrong somehow:

        $counter = "SELECT FROM table VALUES CommentCounterReset WHERE Name = '$name'";

        if(!isset($_SESSION['ts']));
        {
            $_SESSION['ts'] = time();
        }

        if ($counter >= 5)
        {
            if (time() - $_SESSION['ts'] <= 60*60*24){
                echo "You already wrote five comments.";
            }
            else {
                $sql = "UPDATE table SET CommentCounterReset = CommentCounterReset-5 WHERE Name = '$name'";
            }
        }
        else {
            $sql = "UPDATE table SET CommentCounterReset = CommentCounterReset+1 WHERE Name = '$name'";
            echo "Your comment has been added.";
        }
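
    A hedged sketch of a fix (assumes a mysqli connection in $db; the table and column names are kept from the question). Two bugs stand out: $counter holds the SQL string itself rather than a fetched value (and that SELECT syntax is invalid), so "$counter >= 5" never compares a number; and the stray semicolon after if(!isset(...)) ends that if early, so its block always runs:

        // Fetch the current counter value for this user.
        $stmt = $db->prepare("SELECT CommentCounterReset FROM `table` WHERE Name = ?");
        $stmt->bind_param("s", $name);
        $stmt->execute();
        $stmt->bind_result($counter);
        $stmt->fetch();
        $stmt->close();

        if (!isset($_SESSION['ts'])) {   // no ';' here, or the block always runs
            $_SESSION['ts'] = time();
        }

        $safeName = $db->real_escape_string($name);
        if ($counter >= 5) {
            if (time() - $_SESSION['ts'] <= 60*60*24) {
                echo "You already wrote five comments.";
            } else {
                $db->query("UPDATE `table` SET CommentCounterReset = CommentCounterReset-5 WHERE Name = '$safeName'");
                $_SESSION['ts'] = time();   // restart the 24-hour window
            }
        } else {
            $db->query("UPDATE `table` SET CommentCounterReset = CommentCounterReset+1 WHERE Name = '$safeName'");
            echo "Your comment has been added.";
        }

    Note also that the session timestamp only survives one browser session; storing the reset time in the database next to the counter would be more robust.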

  • Wordpress Custom Query

    - by InnateDev
    I have posts that use custom fields for a start date and an end date. query_posts() returns the array of all posts in the category I'm filtering, and pagination works on that full array, so it returns every post. If I use an if/else to only show the posts newer than today, pagination breaks. How do I query posts using this custom date field (e.g. 03/11/2010) instead of the full array? Would I have to build a custom MySQL query?
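
    A hedged sketch using WP_Query's meta_query instead of a raw MySQL query. It assumes the field is named 'start_date' and stores dates in a sortable format like Y-m-d (a 03/11/2010-style value will not compare correctly as a string):

        $paged = get_query_var('paged') ? get_query_var('paged') : 1;
        $query = new WP_Query(array(
            'cat'        => $category_id,        // hypothetical category filter
            'meta_key'   => 'start_date',
            'orderby'    => 'meta_value',
            'order'      => 'ASC',
            'meta_query' => array(
                array(
                    'key'     => 'start_date',
                    'value'   => date('Y-m-d'),  // only posts starting today or later
                    'compare' => '>=',
                    'type'    => 'DATE',
                ),
            ),
            'paged'      => $paged,              // keeps pagination working
        ));

    Because the filtering happens in the query itself rather than in an if/else inside the loop, pagination counts only the matching posts.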

  • C# - How to implement multiple comparers for an IComparable<T> class?

    - by Gary Willoughby
    I have a class that implements IComparable<T>:

        public class MyClass : IComparable<MyClass>
        {
            public int CompareTo(MyClass c)
            {
                return this.whatever.CompareTo(c.whatever);
            }
            // etc...
        }

    I can then call the Sort method of a generic list of my class:

        List<MyClass> c = new List<MyClass>();
        // Add stuff, etc.
        c.Sort();

    and have the list sorted according to my comparer. How do I specify further comparers to sort my collection in different ways according to the other properties of MyClass, so that users can sort my collection in a number of different ways?
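
    A hedged sketch: additional orderings usually go in separate IComparer<T> implementations passed to List<T>.Sort(IComparer<T>). The property name here stands in for whatever MyClass actually exposes:

        public class ByOtherPropertyComparer : IComparer<MyClass>
        {
            public int Compare(MyClass x, MyClass y)
            {
                return x.OtherProperty.CompareTo(y.OtherProperty);
            }
        }

        // Usage:
        c.Sort(new ByOtherPropertyComparer());

        // Or skip the class entirely with a Comparison<T> delegate:
        c.Sort((x, y) => x.OtherProperty.CompareTo(y.OtherProperty));

    CompareTo then remains the "natural" default order, and each extra comparer is an opt-in alternative.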

  • strcmp equivalent for integers (intcmp) in PHP

    - by Chase
    So we have this function in PHP:

        strcmp(string $1, string $2) // returns -1, 0, or 1

    We do not, however, have an intcmp(). So I created one:

        function intcmp($a, $b)
        {
            if ((int)$a == (int)$b) return 0;
            if ((int)$a  > (int)$b) return 1;
            if ((int)$a  < (int)$b) return -1;
        }

    This just feels dirty. What do you all think?
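
    A hedged alternative: on PHP 7 and later, the spaceship operator does exactly this, returning -1, 0, or 1:

        function intcmp($a, $b)
        {
            return (int)$a <=> (int)$b;
        }

    On older PHP, returning the sign of a subtraction gives the same ordering for usort()-style callers, though the result is then any negative/zero/positive integer rather than exactly -1/0/1.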

  • What's the fastest way to compare two objects in PHP?

    - by johnnietheblack
    Let's say I have an object - a User object in this case - and I'd like to be able to track changes to it with a separate class. The User object should not have to change its behavior in any way for this to happen. Therefore, my separate class creates a "clean" copy of it, stores it somewhere locally, and can later compare the User object to the original version to see if anything changed during its lifespan. Is there a function, a pattern, or anything that can quickly compare the two versions of the User object?

    Option 1: Maybe I could serialize each version and directly compare, or hash them and compare?

    Option 2: Maybe I should simply create a ReflectionClass, run through each of the properties of the class, and see if the two versions have the same property values?

    Option 3: Maybe there is a simple native function like objects_are_equal($object1, $object2)?

    What's the fastest way to do this?
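
    A hedged sketch of option 1, which is usually the simplest: serialize() taken at snapshot time is a deep copy of the state as a string, so a later strict comparison also catches changes inside nested objects. PHP's == covers option 3 if a cloned copy is kept instead, since == on two objects of the same class compares their property values:

        // Deep snapshot when tracking starts:
        $snapshot = serialize($user);

        // ... the User object lives its life ...

        $changed = (serialize($user) !== $snapshot);   // strict, detects nested changes

        // Clone-based variant (note: clone is shallow, so objects held in
        // properties are shared and their internal changes won't be seen):
        $clean = clone $user;
        $same  = ($user == $clean);

    No reflection loop is needed for a plain "did anything change?" check; reflection (option 2) only pays off when you need to know which properties changed.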

  • Problem comparing a value with array values in JavaScript

    - by Java starter
    This is the code I use now; it does not work when I try to use an array to compare values. If anybody has any idea why, please respond.

        <html>
        <head>
        <script type='text/javascript'>
        function hovedFunksjon() {
            //alert("test av funksjon fungerte"); // test that the function worked
            //alert(passordLager);
            window.open("index10.html", "Window1", "menubar=no,width=430,height=360,toolbar=no");
        }

        function inArray(array, value) {
            for (var i = 0; i < array.length; i++) {
                if (array[i] == value) return true;
            }
            return false;
        }

        function spørOmPassord() {
            var passordLager = ["pass0", "pass1", "pass2"];
            window.passordInput = prompt("password"); // using "window." creates a global variable
            //if (passordInput == passordLager[0] || passordLager[1] || passordLager[2])
            if (inArray(passordLager, passorInput)) {
                hovedFunksjon();
            } else {
                alert("Feil passord"); // "Wrong password"
                //href="javascript:self.close()">close window
            }
        }

        function changeBackgroundColor() {
            //document.bgColor="#CC9900";
            //document.bgColor="YELLOW"
            document.bgColor = "BLACK";
        }
        </script>
        </head>
        <body>
        <script type='text/javascript'>
        changeBackgroundColor();
        </script>
        <div align="center">
        <form>
        <input type="button" value="Logg inn" onclick="spørOmPassord()">
        </form>
        </div>
        </body>
        </html>
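
    A hedged diagnosis: the array logic itself looks fine; the call passes passorInput (note the missing "d"), an undeclared variable, so the comparison never matches and in strict browsers it throws a ReferenceError. Fixing the name should be enough:

        if (inArray(passordLager, window.passordInput)) {
            hovedFunksjon();
        } else {
            alert("Feil passord");
        }

    The commented-out line above it had a second problem worth noting: passordInput == passordLager[0] || passordLager[1] || passordLager[2] always evaluates truthy, because the non-empty strings passordLager[1] and passordLager[2] are truthy on their own.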

  • C# GroupJoin efficiency

    - by bsnote
    Without using GroupJoin:

        var playersDictionary = players.ToDictionary(
            player => player.Id,
            element => new PlayerDto { Rounds = new List<RoundDto>() });

        foreach (var round in rounds)
        {
            PlayerDto playerDto;
            playersDictionary.TryGetValue(round.PlayerId, out playerDto);
            if (playerDto != null)
            {
                playerDto.Rounds.Add(new RoundDto { });
            }
        }

        var playerDtoItems = playersDictionary.Values;

    Using GroupJoin:

        var playerDtoItems = from player in players
                             join round in rounds on player.Id equals round.PlayerId
                             into playerRounds
                             select new PlayerDto
                             {
                                 Rounds = playerRounds.Select(playerRound => new RoundDto { })
                             };

    Which of these two pieces is more efficient?
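
    A hedged note: both versions are roughly O(players + rounds), because GroupJoin also builds a hash lookup of rounds keyed by PlayerId internally. The main practical differences are that the LINQ version defers execution until enumeration and leaves Rounds as a lazy IEnumerable, while the dictionary version materializes lists eagerly. The same query in method syntax, for comparison:

        var playerDtoItems = players.GroupJoin(
            rounds,
            player => player.Id,
            round => round.PlayerId,
            (player, playerRounds) => new PlayerDto
            {
                Rounds = playerRounds.Select(r => new RoundDto { })
            });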

  • How to build SVN/Git like Diff in WebApp?

    - by 01
    I have XMLs (or objects) that represent data at various points in a business process. I would like to be able to see what has changed between step 1 and step 5 (two versions of the same XML or object). I'd like to implement this like the diff function in a version control system. How would I do it in a web app?

    P.S. I don't want to just store those files in a VCS and then have it do the diff. However, if I could somehow emulate a VCS without having one, that would be cool.

    P.S. I know there are some JS frameworks that offer diff functionality, but the XML could be 10MB, so I think it should be done on the server side.
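
    A hedged server-side sketch (using Python's standard difflib as one option; any language's diff library would do the same job): compute a unified diff between the two stored versions and render it to the page, so the 10MB documents never reach the browser whole:

        import difflib

        def xml_diff(old_text: str, new_text: str) -> str:
            """Unified diff between two versions of the same XML document."""
            return "\n".join(difflib.unified_diff(
                old_text.splitlines(),
                new_text.splitlines(),
                fromfile="step1.xml",
                tofile="step5.xml",
                lineterm="",
            ))

    For XML specifically, pretty-printing both versions first (one element per line) makes line-based diffs far more readable.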

  • Which is the best sql schema comparison tool for Oracle?

    - by mike g
    It should be a tool to enable versioning of a database schema and efficiently updating databases with older versions of the schema:

    - robustness: does it handle all edge cases
    - support for data migration
    - command line execution
    - flexibility: can some data be compared as well

    In the answers, a breakdown of support for these points (and anything I may have missed) would be appreciated.

  • How can I get Ruby to treat the index of a string as a character (rather than the ASCII code)?

    - by user336777
    I am checking to see if the last character in a directory path is a '/'. How do you get Ruby to treat a specific index of a string as a character rather than the associated ASCII code? For example, the following always returns false:

        dir[dir.length - 1] == '/'

    This is because dir[dir.length - 1] returns the ASCII code 47 (rather than '/'). Any thoughts on how to interpret 47 as '/'? Or is there a completely different way to handle this in the first place? Thanks.
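
    A hedged sketch: on Ruby 1.8, String#[] with a single integer returns the byte value, which is what's happening here (Ruby 1.9+ returns a one-character string, so the original code works there). Any of these sidestep the issue:

        dir[-1, 1] == '/'                 # a length-1 slice is a string on 1.8 and 1.9+
        dir[dir.length - 1].chr == '/'    # Integer#chr turns 47 back into "/" (1.8)
        dir =~ %r{/\z}                    # regexp match against a trailing slash
        dir.end_with?("/")                # Ruby 1.8.7+ / 1.9+

    If the goal is building paths, File.join(dir, "file") avoids caring about the trailing slash at all.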

  • Language D compared to C++?

    - by Henrik
    Does anyone here have experience with the language D? I was just reading through its presentation at http://www.d-programming-language.org/ and on Wikipedia, and it seems like a good (or better) alternative to C++. By good/better I mean that it seems simpler, yet keeps all the good parts of C/C++ without some of the difficulties that make C++ harder to learn and use efficiently. So, any comments on or experience with the language D? How about performance?

  • When using MVVM, should you create new viewmodels, or swap out the models?

    - by ConditionRacer
    Say I have a viewmodel like this:

        public class EmployeeViewModel
        {
            private EmployeeModel _model;

            public Color BackgroundColor { get; set; }

            public string Name
            {
                get { return _model.Name; }
                set
                {
                    _model.Name = value;
                    NotifyPropertyChanged("Name");
                }
            }
        }

    This viewmodel binds to a view that displays an employee. The thing to think about is whether this viewmodel represents an employee or a "displayable" employee. The viewmodel contains some things that are view specific, for instance the background color. There can be many employees, but only one employee view. With this in mind, when changing the displayed employee, does it make sense to create a new EmployeeViewModel and rebind it to the view, or to simply swap out the EmployeeModel? Is the distinction even important, or is it a matter of style? I've always leaned toward creating new viewmodels, but I am working on a project where the viewmodels are created once and the models are swapped out. I'm not sure how I feel about this, though it seems to work fine.
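
    A hedged sketch of the swap-the-model approach, for comparison: expose the model through a property, and on assignment raise PropertyChanged with an empty property name, which WPF interprets as "re-evaluate every binding on this object":

        public EmployeeModel Model
        {
            get { return _model; }
            set
            {
                _model = value;
                NotifyPropertyChanged(string.Empty);  // refreshes all bound properties at once
            }
        }

    The new-viewmodel-per-employee style avoids needing that trick, at the cost of re-running the view's bindings and resetting any view-specific state (like BackgroundColor) on each switch; which of those is desirable is exactly the style question being asked.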

  • Is single float assignment an atomic operation on the iPhone?

    - by iter
    I assume that on a 32-bit device like the iPhone, assigning to a 32-bit float is an atomic, thread-safe operation. I want to make sure it is. I have a C function that I want to call from an Objective-C thread, and I don't want to acquire a lock before calling it:

        void setFloatValue(float value) {
            globalFloat = value;
        }
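
    A hedged note: on ARM, a naturally aligned 32-bit store happens as a single instruction, so readers won't see a torn value - but that is all it guarantees; there is no ordering or visibility promise relative to other reads and writes. Where a C11 toolchain is available, making the intent explicit costs nothing on this path:

        #include <stdatomic.h>

        static _Atomic float globalFloat;

        void setFloatValue(float value) {
            // Relaxed store: no tearing, no ordering guarantees - matching
            // what the plain assignment gives on aligned 32-bit ARM.
            atomic_store_explicit(&globalFloat, value, memory_order_relaxed);
        }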

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team.

    Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)

    XFS keeps up rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in some sample statistics for XFS/Ext4+JBD2/Btrfs via:

        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs

        XFS:       141 files changed, 27598 insertions(+), 19113 deletions(-)
        Ext4+JBD2:  39 files changed, 10487 insertions(+),  5454 deletions(-)
        Btrfs:      70 files changed, 19875 insertions(+),  8130 deletions(-)

    What made up those changes in XFS?

    - Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    - Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, reducing CPU overhead.
    - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this.
    - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. with CRC enabled.
    - CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments.
    - Readahead of log objects during recovery; this change speeds up log replay significantly.
    - Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes carrying post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning on a longish interval (5 minutes by default, tunable) and on-demand scanning/trimming via ioctl(2).

    Bitter arguments ensued in this session, especially around comparisons between Ext4 and Btrfs in different areas; I spent a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because:

    - Stability: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more. I owe that to the XFS upstream code reviewers, who always perform serious code review as well as testing.
    - Good performance for large and small files; the claim that XFS does not work well for small files has been an old story for years.
    - Best choice (maybe) for distributed PB-scale filesystems; e.g. Ceph recommends deploying OSD daemons on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB); Ext4 does not support a single file larger than around 15.95TB.
    - Scalability: any objection to XFS being best on this point? :)
    - XFS deals with transaction concurrency better than Ext4. Why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4.
    - Misc: Ext4 is widely used and has been proven fast and stable in various loads and scenarios; XFS just needs more customers; and Btrfs is still on the road to maturity.
    Ceph Introduction (led by Li Wang)

    This was a hot topic. Li gave us a nice introduction to the design as well as their current work. The Ceph client has actually been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production. Their major work focuses on inline data support. Separating metadata and data storage increases file access time: a file access needs two round trips, fetching the metadata from the MDS and then getting the data from the OSD, and small-file access is limited by the network latency. The solution is, for small files, to store the data with the metadata, so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. This way they reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature they have only run some small-scale testing, but they really saw noticeable improvements.

    Test environment: Intel 2-CPU 12-core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSDs, 1 MDS, 1 MON. Sequential read performance for 1K-size files improved by about 50%.

    I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable enough to be deployed in a production environment for large-scale, PB-level storage, but they could not give a positive answer; it looks like Ceph is not even rolled out across DreamHost (subject to confirmation). According to Li, they have only deployed Ceph on small-scale storage (32 nodes), although they would like to try 6000 nodes in the future.

    Improve Linux swap for flash storage (led by Shaohua Li)

    Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer to this is to use the SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to using SSD for swap efficiently.

    SWAPOUT:

    - swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages on SSD. Shaohua Li changed it to a cluster (128K) list, giving an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD.
    - IO pattern: in most cases, swap IO comes in an interleaved pattern because of multiple reclaimers, or because a free cluster is shared by all reclaimers. The block layer can merge interleaved IO to some extent, but we cannot count on it completely. Hence a per-cpu cluster was added on top of the previous change; it helps a reclaimer do sequential IO, which the block layer can merge more easily.
    - TLB flush: if we are reclaiming one active page, we first move the page from the active LRU list to the inactive LRU list, and then reclaim it from the inactive LRU to swap it out. During the process, we need to clear the PTE twice: first the 'A' (ACCESS) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline.

    SWAPIN:

    - A page fault does iodepth=1 synchronous IO, but it is a bit wasteful to issue only a page-sized IO. The obvious solution is swap readahead.
    But the current in-kernel swap readahead is arbitrary (always 8 pages) and often performs poorly for both random and sequential access workloads. Shaohua introduced a new use of madvise(MADV_WILLNEED) to do swap prefetch, so the change happens in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could be done there too).

    SWAP discard:

    - As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to async discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard was also optimized to the cluster.

    Misc:

    - Lock contention: with many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. But some lock contention remains on very fast SSDs because of the swapcache address_space lock.

    Zproject (led by Bob Liu)

    Bob gave us a very nice introduction to the current state of memory compression. There are now three projects (zswap/zram/zcache) which all aim at smoothing swap IO storms and improving performance, but each has its own pros and cons.

    - ZSWAP is implemented on the frontswap API and uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages which are then written to the real swap device - but this can fail when allocating the two pages.
    - ZRAM acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since more pages are needed to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are currently in the drivers/staging tree, and in the mm community there have been discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet.
    - ZCACHE handles file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI)

    An LSI engineer introduced several new products to us. The first was RAID 5/6 cards that use full-stripe writes to improve performance. The second was the SandForce flash controller, which can infer data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It is called DuraWrite, and a typical WA is 0.5. What's more, with its Dynamic Logical Capacity function module enabled, the controller can do data compression transparently to the upper layers. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of data, but the application must monitor free flash space to maintain optimal performance and guard against free-space exhaustion. He said the most useful application is databases.

    Another thing worth mentioning is an NV-DRAM memory in NMR/Raptor which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address, using the standard mmap() system call. He said it is very useful for database logging today.
    These kinds of NVM products have been appearing in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will affect the current OS layers, especially file systems; e.g. journaling may need a redesign to fully utilize such nonvolatile memory.

    OCFS2 (led by Canquan Shen)

    Without a doubt, Huawei has been the biggest contributor to OCFS2 over the past two years. They have posted 46 patches upstream, and 39 have been merged. Their current project targets 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work in progress is supporting ATS (atomic test and set), which can work together with the DLM. This idea looks inspired by VMware's VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)

    This talk focused on how to find and fix bugs as Linux's complexity grows. Generally, we can do this with the following kinds of tools:

    - Static code checkers, e.g. sparse, smatch, coccinelle, the clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things - simple mistakes as well as complex problems - but the challenges are: some are very slow; false positives; and a concentrated effort may be needed to get false positives down. In particular, no static checker he has found can follow indirect calls ("OO in C", common in the kernel):

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        };

        foo->do_foo(foo);

    - Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    - Fuzzers/test suites, e.g. Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers are built around automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
    - Debuggers/tracers to understand code, e.g. ftrace, which can dump on events/oopses/custom triggers, but in many cases there is still too much overhead to run it at all times during debugging.
    - Tools to read and understand source, e.g. grep/cscope work great for many cases, but do not understand indirect pointers (the OO-in-C model used in the kernel). Give us all "do_foo" instances:

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        } = { .do_foo = my_foo };

        foo->do_foo(foo);

      It would be great to have a cscope-like tool that understands this based on types/initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

    I gave a talk introducing the disk layout, the unique features, and the recent changes. The slides include some charts comparing the performance of XFS/Btrfs/Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only three people raised their hands - but they were Chris Mason, Ric Wheeler, and one other attendee. The audience questions mainly focused on stability and comparisons with other file systems.

    Linux Containers (Feng Gao)

    The speaker introduced the purpose of the various namespaces - mount/UTS/IPC/Network/Pid/User - as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than LXC itself.
    Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two possible new namespaces for the future: the first is audit, though it is not yet clear whether it should be tied to the user namespace; the other is syslog, where the question is whether we really need it.

    In-memory Compression (Bob Liu)

    The same as at CLSF - a nice introduction that I have already covered above.

    Misc

    There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to have recorded all those things.

    -- Jeff Liu
