Search Results

Search found 11130 results on 446 pages for 'solaris 11'.


  • Efficiency of data structures in C99 (possibly affected by endianness)

    - by Ninefingers
    Hi All, I have a couple of questions that are all inter-related. Basically, in the algorithm I am implementing, a word w is defined as four bytes, so it can be contained whole in a uint32_t. However, during the operation of the algorithm I often need to access the various parts of the word. I can do this in two ways:

        uint32_t w = 0x11223344;
        uint8_t a = (w & 0xff000000) >> 24;
        uint8_t b = (w & 0x00ff0000) >> 16;
        uint8_t c = (w & 0x0000ff00) >> 8;
        uint8_t d = (w & 0x000000ff);

    However, part of me thinks that isn't particularly efficient. I thought a better way would be to use a union representation, like so:

        typedef union {
            struct {
                uint8_t d;
                uint8_t c;
                uint8_t b;
                uint8_t a;
            };
            uint32_t n;
        } word32;

    Using this method I can assign word32 w; w.n = 0x11223344; and then access the various parts as I require (w.a == 0x11 on a little-endian machine). However, at this stage I come up against endianness issues: on big-endian systems my struct is defined in the wrong order, so I need to re-order the word before it is passed in. That I can do without too much difficulty.

    My questions, then: is the first approach (the various bitwise ANDs and shifts) efficient compared to the implementation using a union? Is there any difference between the two generally? Which way should I go on a modern x86_64 processor? Is endianness just a red herring here? I could inspect the assembly output, of course, but my knowledge of compilers is not brilliant. I would have thought the union would be more efficient, as it essentially converts to memory offsets, like so:

        mov eax, [r9+8]

    Would a compiler realise that this is what is happening in the bit-shift case above? If it matters, I'm using C99, and specifically my compiler is clang (LLVM). Thanks in advance.
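
    As a rough illustration (not from the original question): the shift-and-mask form operates on the value of the word rather than on its memory layout, so it yields the same bytes on little- and big-endian hosts, and with optimisation enabled compilers such as clang and gcc typically reduce each such expression to a single byte extraction. A minimal, hedged sketch:

        #include <cstdint>
        #include <cstdio>

        // Extract byte i (0 = least significant) of a 32-bit word.
        // Shifting works on the value, not the stored layout, so the result
        // is identical on little- and big-endian machines.
        static inline std::uint8_t byte_of(std::uint32_t w, unsigned i) {
            return static_cast<std::uint8_t>(w >> (8u * i));
        }

        int main() {
            std::uint32_t w = 0x11223344u;
            std::printf("%02x %02x %02x %02x\n",
                        byte_of(w, 3), byte_of(w, 2), byte_of(w, 1), byte_of(w, 0));
            // prints "11 22 33 44" regardless of host byte order
            return 0;
        }

    The union, by contrast, reads the stored bytes directly, which is exactly why endianness only enters the picture in that version.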

  • Volunteer for a potential employer?

    - by EoRaptor013
    I've been looking for work since March, and haven't had much luck. Recently, however, I interviewed with a small company near my home for a C#, .NET, SQL development position. I hit it off very well with the hiring manager during the phone screen, and even more so during the face to face. Unfortunately, I failed the practical test: wiring up a web form, creating a couple of SQL stored procedures, saving new data with validation, and creating a minimal search screen. I knew what I was doing, but I was too slow to meet their standards, as all the work needed to be done within an hour.

    Nevertheless, I really liked the place, the environment, the people I would have been working with, and the boss. (I gave the company an 11 on Joel's 12-point scale.) So, the obvious next step was to scrape the rust off. I've been trying to create little projects for myself, but I don't know that I've been effective in getting any faster. What with all that goes into creating a project, I'm not heads-down coding as much as I think I need to be.

    Now, with all that introduction, here's the question. I have been thinking about calling the hiring manager at that place and asking him to let me volunteer for three or four weeks, with no strings attached. I think it would benefit me, and it wouldn't cost him anything (as long as I didn't slow the existing people down!). At the end of that period, he might, or might not, be inclined to hire me, but even if not, I would have had as much as 160 hours of in-the-trenches development. Maybe not all shiny, but no more rust, I would think.

    Does this plan make any sense at all? I certainly don't want to sound desperate (although I'm not far from being there), and I very much need the tune-up, lube, and oil change. What's the downside, if any, to me doing this? Do any of you see red flags going up, either from the perspective of the hiring manager or from the perspective of a developer?

  • How to optimize this mysql query - explain output included

    - by Sandeepan Nath
    This is the query (a search query, basically, based on tags):

        select SUM(DISTINCT(ttagrels.id_tag in (2105,2120,2151,2026,2046))) as key_1_total_matches, td.*, u.*
        from Tutors_Tag_Relations AS ttagrels
        Join Tutor_Details AS td ON td.id_tutor = ttagrels.id_tutor
        JOIN Users as u on u.id_user = td.id_user
        where (ttagrels.id_tag in (2105,2120,2151,2026,2046))
        group by td.id_tutor
        HAVING key_1_total_matches = 1

    And following is the database dump needed to execute this query:

        CREATE TABLE IF NOT EXISTS `Users` (
          `id_user` int(10) unsigned NOT NULL auto_increment,
          `id_group` int(11) NOT NULL default '0',
          PRIMARY KEY (`id_user`),
          KEY `Users_FKIndex1` (`id_group`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=730 ;

        INSERT INTO `Users` (`id_user`, `id_group`) VALUES (303, 1);

        CREATE TABLE IF NOT EXISTS `Tutor_Details` (
          `id_tutor` int(10) unsigned NOT NULL auto_increment,
          `id_user` int(10) NOT NULL default '0',
          PRIMARY KEY (`id_tutor`),
          KEY `Users_FKIndex1` (`id_user`),
          KEY `id_user` (`id_user`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=58 ;

        INSERT INTO `Tutor_Details` (`id_tutor`, `id_user`) VALUES (26, 303);

        CREATE TABLE IF NOT EXISTS `Tags` (
          `id_tag` int(10) unsigned NOT NULL auto_increment,
          `tag` varchar(255) default NULL,
          PRIMARY KEY (`id_tag`),
          UNIQUE KEY `tag` (`tag`),
          KEY `id_tag` (`id_tag`),
          KEY `tag_2` (`tag`),
          KEY `tag_3` (`tag`),
          KEY `tag_4` (`tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2957 ;

        INSERT INTO `Tags` (`id_tag`, `tag`) VALUES
        (2026, 'Brendan.\nIn'), (2046, 'Brendan.'), (2105, 'Brendan'), (2120, 'Brendan''s'), (2151, 'Brendan)');

        CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` (
          `id_tag` int(10) unsigned NOT NULL default '0',
          `id_tutor` int(10) unsigned default NULL,
          `tutor_field` varchar(255) default NULL,
          `cdate` timestamp NOT NULL default CURRENT_TIMESTAMP,
          `udate` timestamp NULL default NULL,
          KEY `Tutors_Tag_Relations` (`id_tag`),
          KEY `id_tutor` (`id_tutor`),
          KEY `id_tag` (`id_tag`),
          KEY `id_tutor_2` (`id_tutor`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`, `tutor_field`, `cdate`, `udate`) VALUES
        (2105, 26, 'firstname', '2010-06-17 17:08:45', NULL);

        ALTER TABLE `Tutors_Tag_Relations`
          ADD CONSTRAINT `Tutors_Tag_Relations_ibfk_2` FOREIGN KEY (`id_tutor`) REFERENCES `Tutor_Details` (`id_tutor`) ON DELETE NO ACTION ON UPDATE NO ACTION,
          ADD CONSTRAINT `Tutors_Tag_Relations_ibfk_1` FOREIGN KEY (`id_tag`) REFERENCES `Tags` (`id_tag`) ON DELETE NO ACTION ON UPDATE NO ACTION;

    What does the query do? It searches for tutors whose records contain "Brendan" (in their name, biography, or similar). The id_tags 2105, 2120, 2151, 2026 and 2046 are simply the tags that are LIKE "%Brendan%".

    My questions are:

    1. In the EXPLAIN output for this query, the ref column shows NULL for ttagrels, even though there are possible keys (Tutors_Tag_Relations, id_tutor, id_tag, id_tutor_2). So why is no key being used, and how can I make the query use one? Is it possible at all?
    2. The other two tables, td and u, are using references. Is any indexing needed on those? I think not.

    Check the EXPLAIN output here: http://www.test.examvillage.com/explain.png

  • Accessing values in a Map container whose values were passed on as a stream

    - by wilson88
    I am trying to get access to the values of the objects that were sent as a stream from one class to another. Apparently I can view the objects via their keys, but I am not so sure how to get to the values, i.e. the Bid values: trdId, qty, price. If possible, could you demonstrate how I can compare the prices held in the buyers and sellers containers? The code is as below:

        void Auctioneer::printTable(map<int, Bid*> bidtable) {
            map<int, Bid*>::const_iterator iter;
            cout << "\t\tBidID | TradID | Type | Qty | Price \n\n";
            for (iter = bidtable.begin(); iter != bidtable.end(); iter++)
                cout << iter->second->toString() << endl << "\n";

            //-------------------------------------------------------------------------
            // Creating another map for the sellers.
            cout << "These are the Sellers bids\n\n";
            map<int, Bid*> sellers(bidtable);
            sellers.erase(10); sellers.erase(11); sellers.erase(12); sellers.erase(13); sellers.erase(14);
            sellers.erase(15); sellers.erase(16); sellers.erase(17); sellers.erase(18); sellers.erase(19);
            for (iter = sellers.begin(); iter != sellers.end(); iter++)
                cout << iter->second->toString() << endl << "\n";

            //--------------------------------------------------------------------------
            // Creating another map for the buyers.
            cout << "These are the Buyers bids\n\n";
            map<int, Bid*> buyers(bidtable);
            buyers.erase(0); buyers.erase(1); buyers.erase(2); buyers.erase(3); buyers.erase(4); buyers.erase(5);
            buyers.erase(6); buyers.erase(7); buyers.erase(8); buyers.erase(9);
            for (iter = buyers.begin(); iter != buyers.end(); iter++)
                cout << iter->second->toString() << endl << "\n";
        }
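
    As a hedged sketch only (the Bid struct below is a hypothetical stand-in with an assumed getPrice() accessor, not the original class): values are reached through iter->second exactly as toString() is, and the buyers/sellers split can be done by key range instead of erasing individual keys.

        #include <iostream>
        #include <map>

        // Hypothetical stand-in for the real Bid class; assumes the price is exposed.
        struct Bid {
            int    trdId;
            int    qty;
            double price;
            double getPrice() const { return price; }   // assumed accessor
        };

        typedef std::map<int, Bid*> BidTable;

        // Compare the best (lowest) seller price with the best (highest) buyer price.
        void comparePrices(const BidTable& table) {
            double bestBuy  = -1.0;
            double bestSell = -1.0;
            for (BidTable::const_iterator it = table.begin(); it != table.end(); ++it) {
                const Bid* bid = it->second;          // value access, just like toString()
                bool isSeller = (it->first < 10);     // in the code above, keys 0-9 were the sellers
                if (isSeller && (bestSell < 0 || bid->getPrice() < bestSell))
                    bestSell = bid->getPrice();
                if (!isSeller && bid->getPrice() > bestBuy)
                    bestBuy = bid->getPrice();
            }
            if (bestSell >= 0 && bestBuy >= bestSell)
                std::cout << "Best buyer (" << bestBuy << ") meets best seller (" << bestSell << ")\n";
            else
                std::cout << "No match: best buy " << bestBuy << ", best sell " << bestSell << "\n";
        }

        int main() {
            Bid s1 = {1, 10, 5.0};
            Bid b1 = {2, 10, 6.0};
            BidTable table;
            table[0]  = &s1;   // seller slot
            table[10] = &b1;   // buyer slot
            comparePrices(table);
            return 0;
        }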

  • Are "EXC_BREAKPOINT (SIGTRAP)" exceptions caused by debugging breakpoints?

    - by Dennis
    I have a multithreaded app that is very stable on all my test machines and seems to be stable for almost every one of my users (based on no complaints of crashes). The app crashes frequently for one user, though, who was kind enough to send crash reports. All the crash reports (~10 consecutive reports) look essentially identical:

        Date/Time:       2010-04-06 11:44:56.106 -0700
        OS Version:      Mac OS X 10.6.3 (10D573)
        Report Version:  6

        Exception Type:  EXC_BREAKPOINT (SIGTRAP)
        Exception Codes: 0x0000000000000002, 0x0000000000000000
        Crashed Thread:  0  Dispatch queue: com.apple.main-thread

        Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
        0   com.apple.CoreFoundation    0x90ab98d4 __CFBasicHashRehash + 3348
        1   com.apple.CoreFoundation    0x90adf610 CFBasicHashRemoveValue + 1264
        2   com.apple.CoreText          0x94e0069c TCFMutableSet::Intersect(__CFSet const*) const + 126
        3   com.apple.CoreText          0x94dfe465 TDescriptorSource::CopyMandatoryMatchableRequest(__CFDictionary const*, __CFSet const*) + 115
        4   com.apple.CoreText          0x94dfdda6 TDescriptorSource::CopyDescriptorsForRequest(__CFDictionary const*, __CFSet const*, long (*)(void const*, void const*, void*), void*, unsigned long) const + 40
        5   com.apple.CoreText          0x94e00377 TDescriptor::CreateMatchingDescriptors(__CFSet const*, unsigned long) const + 135
        6   com.apple.AppKit            0x961f5952 __NSFontFactoryWithName + 904
        7   com.apple.AppKit            0x961f54f0 +[NSFont fontWithName:size:] + 39
        (....more text follows)

    First, I spent a long time investigating [NSFont fontWithName:size:]. I figured that maybe the user's fonts were screwed up somehow, so that [NSFont fontWithName:size:] was requesting something non-existent and failing for that reason. I added a bunch of code using [[NSFontManager sharedFontManager] availableFontNamesWithTraits:NSItalicFontMask] to check for font availability in advance. Sadly, these changes didn't fix the problem.

    I've now noticed that I forgot to remove some debugging breakpoints, including _NSLockError, [NSException raise], and objc_exception_throw. However, the app was definitely built using "Release" as the active build configuration. I assume that using the "Release" configuration prevents setting of any breakpoints, but then again I am not sure exactly how breakpoints work or whether the program needs to be run from within gdb for breakpoints to have any effect.

    My questions are: could my having left the breakpoints set be the cause of the crashes observed by the user? If so, why would the breakpoints cause a problem only for this one user? If not, has anybody else had similar problems with [NSFont fontWithName:size:]? I will probably just try removing the breakpoints and sending a new build back to the user, but I'm not sure how much currency I have left with that user. And I'd like to understand more generally whether leaving the breakpoints set could possibly cause a problem (when the app is built using the "Release" configuration).

  • Animated Ellipse

    - by user287798
    Hi, I need some help. I need a XAML with animated ellipses like the ones shown here: http://www.telerik.com/products/silverlight/chart.aspx. They are to act as hotspot indicators on my map. What I have below is the XAML I was trying, but the circles don't center properly and have no fade-in/fade-out effect. Can somebody help me, please? Thanks, Sam

        <UserControl
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            x:Class="SilverlightApplication3.MainPage"
            Width="640" Height="480">

          <Canvas>
            <Canvas.Triggers>
              <EventTrigger RoutedEvent="Canvas.Loaded">
                <EventTrigger.Actions>
                  <BeginStoryboard>
                    <Storyboard>
                      <DoubleAnimation RepeatBehavior="Forever" Storyboard.TargetName="Pt1"
                                       Storyboard.TargetProperty="ScaleX" From="1" To="0.3" Duration="0:0:5" />
                    </Storyboard>
                  </BeginStoryboard>
                  <BeginStoryboard>
                    <Storyboard>
                      <DoubleAnimation RepeatBehavior="Forever" Storyboard.TargetName="Pt1"
                                       Storyboard.TargetProperty="ScaleY" From="1" To="0.3" Duration="0:0:5" />
                    </Storyboard>
                  </BeginStoryboard>
                  <!--<BeginStoryboard>
                    <Storyboard>
                      <DoubleAnimation RepeatBehavior="Forever" Storyboard.TargetName="rect_Copy"
                                       Storyboard.TargetProperty="Width" From="30" To="100" Duration="0:0:5" />
                    </Storyboard>
                  </BeginStoryboard>
                  <BeginStoryboard>
                    <Storyboard>
                      <DoubleAnimation RepeatBehavior="Forever" Storyboard.TargetName="rect_Copy"
                                       Storyboard.TargetProperty="Height" From="30" To="100" Duration="0:0:5" />
                    </Storyboard>
                  </BeginStoryboard>-->
                </EventTrigger.Actions>
              </EventTrigger>
            </Canvas.Triggers>

            <Ellipse x:Name="rect" Stroke="Green" Width="100" Height="100" Canvas.Left="30" Canvas.Top="29">
              <Ellipse.RenderTransform>
                <ScaleTransform x:Name="Pt1" ScaleX="1" ScaleY="1"/>
              </Ellipse.RenderTransform>
            </Ellipse>
            <Ellipse x:Name="rect_Copy" Stroke="Green" Width="30" Height="30" Canvas.Left="65" Canvas.Top="64"/>
          </Canvas>
        </UserControl>

  • JVM throws OutOfMemoryError during GC though there is plenty of memory left...

    - by Shu L.
    I have my Java application configured to use 5G of memory. I got an OutOfMemoryError out of the blue. I inspected the GC log and found plenty of memory left: the young generation occupies 4% of its allocated space, tenured generation occupancy is 5%, and the perm generation is at 43%. I am puzzled why the JVM throws an OutOfMemoryError at GC time. Does anyone know why this is happening? Your help is greatly appreciated.

    JVM memory and GC settings:

        -server -Xms5g -Xmx5g -Xss256k -XX:NewSize=2g -XX:MaxNewSize=2g -XX:+UseParallelOldGC
        -XX:+UseTLAB -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:+DisableExplicitGC

    gc.log:

        2009-09-19T03:34:59.741+0000: 92836.778: [GC Desired survivor size 152567808 bytes, new threshold 1 (max 15) [PSYoungGen: 1941492K-144057K(1947072K)] 3138022K-1340830K(5092800K), 0.1947640 secs] [Times: user=0.61 sys=0.01, real=0.19 secs]
        2009-09-19T03:35:29.918+0000: 92866.954: [GC Desired survivor size 152109056 bytes, new threshold 1 (max 15) [PSYoungGen: 1941625K-144049K(1948608K)] 3138398K-1341080K(5094336K), 0.1942000 secs] [Times: user=0.61 sys=0.01, real=0.20 secs]
        2009-09-19T03:35:56.883+0000: 92893.920: [GC Desired survivor size 156565504 bytes, new threshold 1 (max 15) [PSYoungGen: 1567994K-115427K(1915072K)] 2765026K-1312820K(5060800K), 0.1586320 secs] [Times: user=0.50 sys=0.01, real=0.16 secs]
        2009-09-19T03:35:57.042+0000: 92894.079: [GC Desired survivor size 179961856 bytes, new threshold 1 (max 15) [PSYoungGen: 115427K-0K(1898560K)] 1312820K-1313987K(5044288K), 0.0775650 secs] [Times: user=0.42 sys=0.19, real=0.08 secs]
        2009-09-19T03:35:57.120+0000: 92894.157: [Full GC [PSYoungGen: 0K-0K(1898560K)] [ParOldGen: 1313987K-159522K(3145728K)] 1313987K-159522K(5044288K) [PSPermGen: 20025K-19942K(40256K)], 0.5692300 secs] [Times: user=2.18 sys=0.05, real=0.57 secs]
        2009-09-19T03:35:57.690+0000: 92894.726: [GC Desired survivor size 197066752 bytes, new threshold 1 (max 15) [PSYoungGen: 0K-0K(1745728K)] 159522K-159522K(4891456K), 0.0072590 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
        2009-09-19T03:35:57.698+0000: 92894.734: [Full GC [PSYoungGen: 0K-0K(1745728K)] [ParOldGen: 159522K-158627K(3145728K)] 159522K-158627K(4891456K) [PSPermGen: 19942K-19934K(45504K)], 0.3280480 secs] [Times: user=1.46 sys=0.00, real=0.33 secs]

        Heap
         PSYoungGen      total 1745728K, used 87233K [0x00002aab73650000, 0x00002aabf3650000, 0x00002aabf3650000)
          eden space 1745664K, 4% used [0x00002aab73650000,0x00002aab78b80778,0x00002aabddf10000)
          from space 64K, 0% used [0x00002aabddf10000,0x00002aabddf10000,0x00002aabddf20000)
          to   space 192448K, 0% used [0x00002aabe7a60000,0x00002aabe7a60000,0x00002aabf3650000)
         ParOldGen       total 3145728K, used 158627K [0x00002aaab3650000, 0x00002aab73650000, 0x00002aab73650000)
          object space 3145728K, 5% used [0x00002aaab3650000,0x00002aaabd138d28,0x00002aab73650000)
         PSPermGen       total 45504K, used 19965K [0x00002aaaae250000, 0x00002aaab0ec0000, 0x00002aaab3650000)
          object space 45504K, 43% used [0x00002aaaae250000,0x00002aaaaf5cf668,0x00002aaab0ec0000)

    I am on 64-bit Linux and JRE 1.6.0_10:

        $ uname -a
        Linux x 2.6.24-etchnhalf.1-amd64 #1 SMP Tue Oct 14 03:11:45 UTC 2008 x86_64 GNU/Linux
        $ java -version
        java version "1.6.0_10"
        Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
        Java HotSpot(TM) 64-Bit Server VM (build 11.0-b15, mixed mode)

  • How to merge an improperly created "branch" that isn't really a branch (wasn't created by an svn copy)?

    - by MatrixFrog
    I'm working on a team with lots of people who are pretty unfamiliar with the concepts of version control systems, and are just kind of doing whatever seems to work, by trial and error. Someone created a "branch" from the trunk that is not ancestrally related to the trunk. My guess is it went something like this:

    1. They created a folder in branches.
    2. They checked out all the code from the trunk to somewhere on their desktop.
    3. They added all that code to the newly created folder as though it was a bunch of brand new files.

    So the repository isn't aware that all that code is actually just a copy of the trunk. When I look at the history of that branch in TortoiseSVN and uncheck the "Stop on copy/rename" box, there is no revision that has the trunk (or any other path) under the "Copy from path" column.

    Then they made lots of changes on their "branch". Meanwhile, others were making lots of changes on the trunk. We tried to do a merge, and of course it doesn't work, because the trunk and the fake branch are not ancestrally related. I can see only two ways to resolve this:

    1. Go through the logs on the "branch", look at every change that was made, and manually apply each change to the trunk.
    2. Go through the logs on the trunk, look at every change that was made between revision 540 (when the "branch" was created) and HEAD, and manually apply each change to the "branch".

    This involves 7 revisions one way or 11 revisions the other way, so neither one is really that terrible. But is there any way to cause the repository to "realize" that the branch really IS ancestrally related, even though it was created incorrectly, so that we can take advantage of the built-in merging functionality in Eclipse/TortoiseSVN?

    (You may be wondering: why did your company hire these people and allow them to access the SVN repository without making sure they knew how to use it properly first? We didn't; this is a school assignment, a collaboration between two different classes. The ones in the lower class were given a very quick, hand-wavey "overview" of SVN which didn't really teach them anything. I've asked everyone in the group to please PLEASE read the svn book, and I'll make sure we (the slightly more experienced half of the team) keep a close eye on the repository to ensure this doesn't happen again.)

  • Detect Client Computer name when an RDP session is open

    - by Ubiquitous Che
    Hey all, my manager has pointed out to me a few nifty things that one of our accounting applications can do because it can load different settings based on the machine name of the host and the machine name of the client when the package is opened in an RDP session. We want to provide similar functionality in one of my company's applications. I've found out on this site how to detect if I'm in an RDP session, but I'm having trouble finding information anywhere on how to detect the name of the client computer. Any pointers in the right direction would be great. I'm coding in C# for .NET 3.5.

    EDIT: The sample code I cobbled together from the advice below; it should be enough for anyone who has a use for WTSQuerySessionInformation to get a feel for what's going on. Note that this isn't necessarily the best way of doing it, just a starting point that I've found useful. When I run this locally, I get boring, expected answers. When I run it on our local office server in an RDP session, I see my own computer name in the WTSClientName property.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Runtime.InteropServices;

        namespace TerminalServicesTest
        {
            class Program
            {
                const int WTS_CURRENT_SESSION = -1;
                static readonly IntPtr WTS_CURRENT_SERVER_HANDLE = IntPtr.Zero;

                static void Main(string[] args)
                {
                    StringBuilder sb = new StringBuilder();
                    uint byteCount;
                    foreach (WTS_INFO_CLASS item in Enum.GetValues(typeof(WTS_INFO_CLASS)))
                    {
                        Program.WTSQuerySessionInformation(
                            WTS_CURRENT_SERVER_HANDLE, WTS_CURRENT_SESSION, item, out sb, out byteCount);
                        Console.WriteLine("{0}({1}): {2}", item.ToString(), byteCount, sb);
                    }
                    Console.WriteLine();
                    Console.WriteLine("Press any key to exit...");
                    Console.ReadKey();
                }

                [DllImport("Wtsapi32.dll")]
                public static extern bool WTSQuerySessionInformation(
                    IntPtr hServer, int sessionId, WTS_INFO_CLASS wtsInfoClass,
                    out StringBuilder ppBuffer, out uint pBytesReturned);
            }

            enum WTS_INFO_CLASS
            {
                WTSInitialProgram = 0, WTSApplicationName = 1, WTSWorkingDirectory = 2, WTSOEMId = 3,
                WTSSessionId = 4, WTSUserName = 5, WTSWinStationName = 6, WTSDomainName = 7,
                WTSConnectState = 8, WTSClientBuildNumber = 9, WTSClientName = 10, WTSClientDirectory = 11,
                WTSClientProductId = 12, WTSClientHardwareId = 13, WTSClientAddress = 14, WTSClientDisplay = 15,
                WTSClientProtocolType = 16, WTSIdleTime = 17, WTSLogonTime = 18, WTSIncomingBytes = 19,
                WTSOutgoingBytes = 20, WTSIncomingFrames = 21, WTSOutgoingFrames = 22, WTSClientInfo = 23,
                WTSSessionInfo = 24, WTSSessionInfoEx = 25, WTSConfigInfo = 26, WTSValidationInfo = 27,
                WTSSessionAddressV4 = 28, WTSIsRemoteSession = 29
            }
        }

  • Is it worth using std::tr1 in production?

    - by flashnik
    I'm using MS VC 2008 and, for some projects, the Intel C++ compiler 11.0. Is it worth using TR1 features in production? Will they stay in the new standard? For example, I currently use stdext::hash_map. TR1 defines std::tr1::unordered_map, but in the MS implementation unordered_map is just their stdext::hash_map, templatized in another way.
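
    For what it's worth, TR1's unordered_map was adopted essentially unchanged into the C++0x draft standard (what became C++11) as std::unordered_map, so code written against it has a clearer migration path than stdext::hash_map. A minimal sketch of hiding the choice behind a typedef; the header locations are assumptions that may need adjusting per compiler (VC9 SP1 puts TR1 types in <unordered_map>, gcc uses <tr1/unordered_map>):

        #include <iostream>
        #include <string>

        #if defined(USE_TR1_UNORDERED_MAP)
        #include <unordered_map>              // assumed location; <tr1/unordered_map> on gcc
        typedef std::tr1::unordered_map<std::string, int> StringIntMap;
        #else
        #include <hash_map>                   // MSVC-specific container
        typedef stdext::hash_map<std::string, int> StringIntMap;
        #endif

        int main() {
            StringIntMap counts;
            counts["tr1"] = 1;
            counts["hash_map"] = 2;
            for (StringIntMap::const_iterator it = counts.begin(); it != counts.end(); ++it)
                std::cout << it->first << " -> " << it->second << '\n';
            return 0;
        }

    Keeping the container behind one typedef means only this one spot changes when the code eventually moves to std::unordered_map.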

  • Keyboard selecting nested li's with jquery

    - by Joel
    I have a load of nested <ul>s and <li>s, and I would like to be able to have a hover/selected class on an <li> and use the keyboard up and down keys to move the selection up and down through the <li>s. However, they are nested, and the selection needs to jump across <ul>s if necessary. For instance:

        <ul>
          <li class='cat'> cat 1
            <ul>
              <li class='hover'>item 1</li>
              <li>item 2</li>
              <li>item 3</li>
              <li>item 4</li>
            </ul>
          </li>
          <li class='cat'> cat 2
            <ul>
              <li>item 5</li>
              <li>item 6</li>
              <li>item 7</li>
              <li>item 8</li>
            </ul>
            <ul class='subcat'>
              <li class='cat'> Cat 3
                <ul>
                  <li>item 9</li>
                  <li>item 10</li>
                  <li>item 11</li>
                  <li>item 12</li>
                </ul>
              </li>
            </ul>
          </li>
          <li class='cat'> cat 4
            <ul>
              <li>item 13</li>
              <li>item 14</li>
              <li>item 15</li>
              <li>item 16</li>
            </ul>
          </li>
        </ul>

    As I press the down key, I want the items to be selected in numerical order (they do not have numerical-order IDs, and some of them are sometimes hidden, in which case they should be skipped). The selection needs to move to the next <li> that isn't a category and set that as hover.

  • Dual-booting Ubuntu and Pardus with GRUB2...Pardus no show?

    - by Ibn Ali al-Turki
    Hello all, I have Ubuntu 10.10 installed and used to dual-boot Fedora, but I replaced Fedora with Pardus. After the install, I went into Ubuntu and did a sudo update-grub. It detected my Pardus 2011 install there. When I rebooted, however, it did not show up in my GRUB2 menu. I went back to Ubuntu, ran it again, then checked grub.cfg, and it is not there. I have read that Pardus uses GRUB legacy. How can I get Pardus into my GRUB2 menu? Thanks!

        sudo fdisk -l

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xd9b3496e

        Device Boot      Start       End      Blocks   Id  System
        /dev/sda1   *        1     15197   122067968   83  Linux
        /dev/sda2        36394     60802   196059757    5  Extended
        /dev/sda3        15197     30394   122067968   83  Linux
        /dev/sda5        36394     59434   185075308    7  HPFS/NTFS
        /dev/sda6        59434     60802    10983424   82  Linux swap / Solaris

        Partition table entries are not in disk order

    and update-grub:

        Found linux image: /boot/vmlinuz-2.6.35-25-generic
        Found initrd image: /boot/initrd.img-2.6.35-25-generic
        Found memtest86+ image: /boot/memtest86+.bin
        Found Pardus 2011 (2011) on /dev/sda3

    Yet after this, I go to grub.cfg, and Pardus is not there.

  • AES code throwing NoSuchPaddingException: Padding NoPaddin unknown

    - by Tom Brito
    The following code is an attempt to encrypt data using AES with an asymmetric key:

        import java.io.OutputStream;
        import java.math.BigInteger;
        import java.security.Key;
        import java.security.KeyFactory;
        import java.security.interfaces.RSAPrivateKey;
        import java.security.interfaces.RSAPublicKey;
        import java.security.spec.RSAPrivateKeySpec;
        import java.security.spec.RSAPublicKeySpec;
        import javax.crypto.Cipher;

        public class AsyncronousKeyTest {

            private final Cipher cipher;
            private final KeyFactory keyFactory;
            private final RSAPrivateKey privKey;

            private AsyncronousKeyTest() throws Exception {
                cipher = Cipher.getInstance("AES/CBC/NoPaddin", "BC");
                keyFactory = KeyFactory.getInstance("AES", "BC");
                // create the keys
                // TODO should these numbers be random?
                RSAPrivateKeySpec privKeySpec = new RSAPrivateKeySpec(
                        new BigInteger("d46f473a2d746537de2056ae3092c451", 16),
                        new BigInteger("57791d5430d593164082036ad8b29fb1", 16));
                privKey = (RSAPrivateKey) keyFactory.generatePrivate(privKeySpec);
            }

            public void generateAuthorizationAct(OutputStream outputStream) throws Exception {
                // TODO Ticket #14 - GenerateAuthorization action
                KeyFactory keyFactory = KeyFactory.getInstance("AES", "BC");
                // TODO should these numbers be random?
                RSAPublicKeySpec pubKeySpec = new RSAPublicKeySpec(
                        new BigInteger("d46f473a2d746537de2056ae3092c451", 16),
                        new BigInteger("11", 16));
                RSAPublicKey pubKey = (RSAPublicKey) keyFactory.generatePublic(pubKeySpec);
                byte[] data = new byte[] { 0x01 };
                byte[] encrypted = encryptAO(pubKey, data);
                outputStream.write(encrypted);
            }

            /** Encrypt the AuthorizationObject. */
            public byte[] encryptAO(Key pubKey, byte[] data) throws Exception {
                cipher.init(Cipher.ENCRYPT_MODE, pubKey);
                byte[] cipherText = cipher.doFinal(data);
                return cipherText;
            }

            public byte[] decrypt(byte[] cipherText) throws Exception {
                cipher.init(Cipher.DECRYPT_MODE, privKey);
                byte[] decyptedData = cipher.doFinal(cipherText);
                return decyptedData;
            }

            public static void main(String[] args) throws Exception {
                System.out.println("start");
                AsyncronousKeyTest auth = new AsyncronousKeyTest();
                auth.generateAuthorizationAct(System.out);
                System.out.println("done");
            }
        }

    But at the line cipher = Cipher.getInstance(AesEncrypter.getTransformation(), "BC"); it throws NoSuchPaddingException: Padding NoPaddin unknown. What does this mean, and how do I solve it?

  • Help with a logic problem

    - by Stradigos
    I'm having a great deal of difficulty trying to figure out the logic behind this problem. I have developed everything else, but I really could use some help, any sort of help, on the part I'm stuck on.

    Back story: A group of actors waits in a circle. They "count off" by various amounts. The last few to audition are thought to have the best chance of getting the parts and becoming stars. Instead of actors having names, they are identified by numbers. The "Audition Order" in the table tells, reading left-to-right, the "names" of the actors who will be auditioned in the order they will perform.

    Sample output: etc, all the way up to 10.

    What I have so far:

        using System;
        using System.Collections;
        using System.Text;

        namespace The_Last_Survivor
        {
            class Program
            {
                static void Main(string[] args)
                {
                    //Declare Variables
                    int NumOfActors = 0;
                    System.DateTime dt = System.DateTime.Now;
                    int interval = 3;
                    ArrayList Ring = new ArrayList(10);

                    //Header
                    Console.Out.WriteLine("Actors\tNumber\tOrder");

                    //Add Actors
                    for (int x = 1; x < 11; x++)
                    {
                        NumOfActors++;
                        Ring.Insert((x - 1), new Actor(x));
                        foreach (Actor i in Ring)
                        {
                            Console.Out.WriteLine("{0}\t{1}\t{2}", NumOfActors, i, i.Order(interval, x));
                        }
                        Console.Out.WriteLine("\n");
                    }
                    Console.In.Read();
                }

                public class Actor
                {
                    //Variables
                    protected int Number;

                    //Constructor
                    public Actor(int num)
                    {
                        Number = num;
                    }

                    //Order in circle
                    public string Order(int inter, int num)
                    {
                        //Variable
                        string result = "";
                        ArrayList myArray = new ArrayList(num);

                        //Filling Array
                        for (int i = 0; i < num; i++)
                            myArray.Add(i + 1);

                        //Formula
                        foreach (int element in myArray)
                        {
                            if (element == inter)
                            {
                                result += String.Format(" {0}", element);
                                myArray.RemoveAt(element);
                            }
                        }
                        return result;
                    }

                    //String override
                    public override string ToString()
                    {
                        return String.Format("{0}", Number);
                    }
                }
            }
        }

    The part I'm stuck on is getting some math going that does this: Can anyone offer some guidance and/or sample code?
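
    The sample output the question refers to isn't reproduced here, so the exact counting rule is unclear; purely as a hedged illustration, the usual shape of this kind of "count off in a circle" problem is a Josephus-style elimination, sketched below with an assumed fixed interval (the actor counted to auditions next and leaves the ring):

        #include <iostream>
        #include <vector>

        int main() {
            const int actors   = 10;   // assumed group size
            const int interval = 3;    // assumed fixed count-off amount
            std::vector<int> ring;
            for (int i = 1; i <= actors; ++i)
                ring.push_back(i);

            std::cout << "Audition order:";
            std::size_t pos = 0;
            while (!ring.empty()) {
                pos = (pos + interval - 1) % ring.size();  // count off around the circle
                std::cout << ' ' << ring[pos];             // this actor auditions next
                ring.erase(ring.begin() + pos);            // and leaves the ring
            }
            std::cout << '\n';
            return 0;
        }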

  • [JDBCExceptionReporter] SQL Warning in Hibernate

    - by adisembiring
    Hi all, I'm using Hibernate 3.2.1 with a SQL Server 2000 database. When I try to insert some data using my DAO, a warning like this occurs:

        java.sql.SQLWarning: [Microsoft][SQLServer 2000 Driver for JDBC]Database changed to BTN_SPP_DB
            at com.microsoft.jdbc.base.BaseWarnings.createSQLWarning(Unknown Source)
            at com.microsoft.jdbc.base.BaseWarnings.get(Unknown Source)
            at com.microsoft.jdbc.base.BaseConnection.getWarnings(Unknown Source)
            at org.hibernate.util.JDBCExceptionReporter.logAndClearWarnings(JDBCExceptionReporter.java:22)
            at org.hibernate.jdbc.ConnectionManager.closeConnection(ConnectionManager.java:443)
            at org.hibernate.jdbc.ConnectionManager.aggressiveRelease(ConnectionManager.java:400)
            at org.hibernate.jdbc.ConnectionManager.afterTransaction(ConnectionManager.java:287)
            at org.hibernate.jdbc.JDBCContext.afterTransactionCompletion(JDBCContext.java:221)
            at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:119)
            at co.id.hanoman.btnmw.spp.dao.TagihanDao.save(TagihanDao.java:43)
            at co.id.hanoman.btnmw.spp.dao.TagihanDao.save(TagihanDao.java:1)
            at co.id.hanoman.btnmw.spp.dao.test.TagihanDaoTest.testSave(TagihanDaoTest.java:81)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at org.junit.internal.runners.TestMethodRunner.executeMethodBody(TestMethodRunner.java:99)
            at org.junit.internal.runners.TestMethodRunner.runUnprotected(TestMethodRunner.java:81)
            at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
            at org.junit.internal.runners.TestMethodRunner.runMethod(TestMethodRunner.java:75)
            at org.junit.internal.runners.TestMethodRunner.run(TestMethodRunner.java:45)
            at org.junit.internal.runners.TestClassMethodsRunner.invokeTestMethod(TestClassMethodsRunner.java:66)
            at org.junit.internal.runners.TestClassMethodsRunner.run(TestClassMethodsRunner.java:35)
            at org.junit.internal.runners.TestClassRunner$1.runUnprotected(TestClassRunner.java:42)
            at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
            at org.junit.internal.runners.TestClassRunner.run(TestClassRunner.java:52)
            at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:45)
            at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)

    My Hibernate initialization log is:

        2010-04-26 22:54:05,203 INFO  [Version] Hibernate Annotations 3.3.0.GA
        2010-04-26 22:54:05,234 INFO  [Environment] Hibernate 3.2.1
        2010-04-26 22:54:05,234 INFO  [Environment] hibernate.properties not found
        2010-04-26 22:54:05,234 INFO  [Environment] Bytecode provider name : cglib
        2010-04-26 22:54:05,234 INFO  [Environment] using JDK 1.4 java.sql.Timestamp handling
        2010-04-26 22:54:05,343 INFO  [Configuration] configuring from resource: /hibernate.cfg.xml
        2010-04-26 22:54:05,343 INFO  [Configuration] Configuration resource: /hibernate.cfg.xml
        2010-04-26 22:54:05,406 DEBUG [DTDEntityResolver] trying to resolve system-id [http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd]
        2010-04-26 22:54:05,406 DEBUG [DTDEntityResolver] recognized hibernate namespace; attempting to resolve on classpath under org/hibernate/
        2010-04-26 22:54:05,406 DEBUG [DTDEntityResolver] located [http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd] in classpath
        2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.dialect=org.hibernate.dialect.SQLServerDialect
        2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.connection.driver_class=com.microsoft.jdbc.sqlserver.SQLServerDriver
        2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.connection.url=jdbc:microsoft:sqlserver://12.56.11.65:1433;databaseName=BTN_SPP_DB
        2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.connection.username=spp
        2010-04-26 22:54:05,453 DEBUG [Configuration] hibernate.connection.password=spp

  • Android Drawable question.

    - by Tarmon
    Hey everyone, I am trying to create a drawable in code and change its color based on some criteria. I can get it to work, but it doesn't want to let me set the padding on the view. Any help would be appreciated.

        <?xml version="1.0" encoding="utf-8"?>
        <ImageView android:id="@+id/icon"
            android:layout_width="50px"
            android:layout_height="fill_parent" />
        <TextView android:id="@+id/label"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:paddingLeft="17px"
            android:textSize="28sp" />

        ImageView icon = (ImageView) row.findViewById(R.id.icon);
        ShapeDrawable mDrawable;
        int x = 0;
        int y = 0;
        int width = 50;
        int height = 50;
        float[] outerR = new float[] { 12, 12, 12, 12, 12, 12, 12, 12 };
        mDrawable = new ShapeDrawable(new RoundRectShape(outerR, null, null));
        mDrawable.setBounds(x, y + height, x + width, y);
        switch (position) {
            case 0:  mDrawable.getPaint().setColor(0xffff0000); break; // Red
            case 1:  mDrawable.getPaint().setColor(0xffff0000); break; // Red
            case 2:  mDrawable.getPaint().setColor(0xff00c000); break; // Green
            case 3:  mDrawable.getPaint().setColor(0xff00c000); break; // Green
            case 4:  mDrawable.getPaint().setColor(0xff0000ff); break; // Blue
            case 5:  mDrawable.getPaint().setColor(0xff0000ff); break; // Blue
            case 6:  mDrawable.getPaint().setColor(0xff696969); break; // Gray
            case 7:  mDrawable.getPaint().setColor(0xff696969); break; // Gray
            case 8:  mDrawable.getPaint().setColor(0xffffff00); break; // Yellow
            case 9:  mDrawable.getPaint().setColor(0xff8b4513); break; // Brown
            case 10: mDrawable.getPaint().setColor(0xff8b4513); break; // Brown
            case 11: mDrawable.getPaint().setColor(0xff8b4513); break; // Brown
            case 12: mDrawable.getPaint().setColor(0xffa020f0); break; // Purple
            case 13: mDrawable.getPaint().setColor(0xffff0000); break; // Red
            case 14: mDrawable.getPaint().setColor(0xffffd700); break; // Gold
            case 15: mDrawable.getPaint().setColor(0xffff6600); break; // Orange
        }
        icon.setBackgroundDrawable(mDrawable);
        icon.setPadding(5, 5, 5, 5);

    If I set the padding in XML, it just gets ignored. Thanks, Rob

  • FASM vs MASM translation problem in mov si, offset msg

    - by Ruben Trancoso
    Hi folks, I just did my first test with MASM and FASM using (almost) the same code, and I ran into trouble. The only difference is that, to produce just the 104 bytes I need to write to the MBR, in FASM I put org 7c00h and in MASM org 0h. The problem is the mov si, offset msg, which in the first case translates to 44 7C (7c44h) and with MASM translates to 44 00 (0044h), but only when I change org 7c00h to org 0h in MASM. Otherwise MASM produces the entire segment from 0 to 7dffh. How do I solve it? In short, how do I make MASM produce a binary whose first byte is at 7c00h, with subsequent jumps remaining relative to 7c00h?

        .model TINY
        .code
        org 7c00h                   ; Boot entry point. Address 07c0:0000 on the computer memory
            xor ax, ax              ; Zero out ax
            mov ds, ax              ; Set data segment to base of RAM
            jmp start               ; Jump to the first byte after DOS boot record data

        ; ----------------------------------------------------------------------
        ; DOS boot record data
        ; ----------------------------------------------------------------------
        brINT13Flag     db 90h              ; 0002h - 0EH for INT13 AH=42 READ
        brOEM           db 'MSDOS5.0'       ; 0003h - OEM name & DOS version (8 chars)
        brBPS           dw 512              ; 000Bh - Bytes/sector
        brSPC           db 1                ; 000Dh - Sectors/cluster
        brResCount      dw 1                ; 000Eh - Reserved (boot) sectors
        brFATs          db 2                ; 0010h - FAT copies
        brRootEntries   dw 0E0h             ; 0011h - Root directory entries
        brSectorCount   dw 2880             ; 0013h - Sectors in volume, < 32MB
        brMedia         db 240              ; 0015h - Media descriptor
        brSPF           dw 9                ; 0016h - Sectors per FAT
        brSPH           dw 18               ; 0018h - Sectors per track
        brHPC           dw 2                ; 001Ah - Number of Heads
        brHidden        dd 0                ; 001Ch - Hidden sectors
        brSectors       dd 0                ; 0020h - Total number of sectors
                        db 0                ; 0024h - Physical drive no.
                        db 0                ; 0025h - Reserved (FAT32)
                        db 29h              ; 0026h - Extended boot record sig
        brSerialNum     dd 404418EAh        ; 0027h - Volume serial number (random)
        brLabel         db 'OSAdventure'    ; 002Bh - Volume label (11 chars)
        brFSID          db 'FAT12 '         ; 0036h - File System ID (8 chars)

        ;------------------------------------------------------------------------
        ; Boot code
        ; ----------------------------------------------------------------------
        start:
            mov si, offset msg
            call showmsg
        hang:
            jmp hang

        msg db 'Loading...',0

        showmsg:
            lodsb
            cmp al, 0
            jz showmsgd
            push si
            mov bx, 0007
            mov ah, 0eh
            int 10h
            pop si
            jmp showmsg
        showmsgd:
            retn

        ; ----------------------------------------------------------------------
        ; Boot record signature
        ; ----------------------------------------------------------------------
        dw 0AA55h                   ; Boot record signature
        END

  • Table filtering in jquery - a more elegant solution please

    - by Neil Burton
    I want to filter certain rows out of a table and am using classes to categorise the rows. The code below enables me to show and hide row data categorised as "QUO" and "CAL" (eventually there will be other categories). Can someone point me towards a more elegant solution, so I don't have to duplicate code for each category as I have below? Thanks!

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Untitled</title>
            <style>
            </style>
            <script src="Javascript/jquery-1.4.2.min.js" type="text/javascript"></script>
            <script type="text/javascript">
                $(document).ready(function () {
                    $("#toggle_ac_cal").click(function() {
                        var checked_status = this.checked;
                        if (checked_status == true) {
                            $(".ac_cal").show()
                        } else {
                            $(".ac_cal").hide()
                        }
                    });
                    $("#toggle_ac_quo").click(function() {
                        var checked_status = this.checked;
                        if (checked_status == true) {
                            $(".ac_quo").show()
                        } else {
                            $(".ac_quo").hide()
                        }
                    });
                });
            </script>
        </head>
        <body>
            <input type="checkbox" id="toggle_ac_cal" checked="checked" />CAL<br/>
            <input type="checkbox" id="toggle_ac_quo" checked="checked" />QUO<br/>
            <table>
                <tbody>
                    <tr class="ac_cal"><td>CAL</td><td>10 Oct</td><td>John Barnes</td></tr>
                    <tr class="ac_cal"><td>CAL</td><td>10 Oct</td><td>Neil Burton</td></tr>
                    <tr class="ac_quo"><td>QUO</td><td>11 Oct</td><td>Neil Armstrong</td></tr>
                </tbody>
            </table>
        </body>
        </html>

  • Android Application Crashes

    - by deewangan
    Hello everyone, I am trying to run an application on an Android emulator, but it crashes. I am following a how-to, and I don't know what to do; it just crashes. Other applications run fine. Can anyone tell me what I am doing wrong? Here is the code:

        public class Finder extends Activity {
            /** Called when the activity is first created. */
            private LocationManager myLocationManager;
            private LocationListener myLocationListener;
            private TextView myLatitude, myLongitude;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                myLatitude = (TextView) findViewById(R.id.Latitude);
                myLongitude = (TextView) findViewById(R.id.Longitude);

                myLocationManager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
                myLocationListener = new MyLocationListener();
                myLocationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, myLocationListener);

                myLatitude.setText(String.valueOf(
                        myLocationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER).getLatitude()));
                myLongitude.setText(String.valueOf(
                        myLocationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER).getLongitude()));
            }

            private class MyLocationListener implements LocationListener {
                public void onLocationChanged(Location argLocation) {
                    myLatitude.setText(String.valueOf(argLocation.getLatitude()));
                    myLongitude.setText(String.valueOf(argLocation.getLongitude()));
                }
                public void onProviderDisabled(String provider) { /* TODO Auto-generated method stub */ }
                public void onProviderEnabled(String provider) { /* TODO Auto-generated method stub */ }
                public void onStatusChanged(String provider, int status, Bundle extras) { /* TODO Auto-generated method stub */ }
            };
        }

    I looked in logcat after running the application. It seems the following lines are the cause of the problem, but I don't understand them:

        01-18 22:12:46.017: WARN/dalvikvm(1091): threadid=3: thread exiting with uncaught exception (group=0x4001aa28)
        01-18 22:12:46.017: ERROR/AndroidRuntime(1091): Uncaught handler: thread main exiting due to uncaught exception
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091): java.lang.RuntimeException: Unable to start activity ComponentInfo{pro.googleLocation/pro.googleLocation.Finder}: java.lang.NullPointerException
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2401)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2417)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at android.app.ActivityThread.access$2100(ActivityThread.java:116)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1794)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at android.os.Handler.dispatchMessage(Handler.java:99)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at android.os.Looper.loop(Looper.java:123)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at android.app.ActivityThread.main(ActivityThread.java:4203)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at java.lang.reflect.Method.invokeNative(Native Method)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at java.lang.reflect.Method.invoke(Method.java:521)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at dalvik.system.NativeStart.main(Native Method)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091): Caused by: java.lang.NullPointerException
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at pro.googleLocation.Finder.onCreate(Finder.java:28)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2364)
        01-18 22:12:46.037: ERROR/AndroidRuntime(1091):     ... 11 more

  • Search for multiple values in an xml column

    - by Yuriy Gettya
    Environment: SQL Server 2012, with a primary and a secondary (VALUE) XML index built on the xml column.

    Say I have a table Message with an xml column WordIndex. I also have a table Word, which has WordId and WordText. The XML in Message.WordIndex has the following schema:

        <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.example.com">
          <xs:element name="wi">
            <xs:complexType>
              <xs:sequence>
                <xs:element maxOccurs="unbounded" name="w">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element maxOccurs="unbounded" name="p" type="xs:unsignedByte" />
                    </xs:sequence>
                    <xs:attribute name="wid" type="xs:unsignedByte" use="required" />
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    and some data to go with it:

        <wi xmlns="http://www.example.com">
          <w wid="1">
            <p>28</p>
            <p>72</p>
            <p>125</p>
          </w>
          <w wid="4">
            <p>89</p>
          </w>
          <w wid="5">
            <p>11</p>
          </w>
        </wi>

    I need to search for multiple values in my xml column WordIndex, combined with either OR or AND. What I'm doing is fairly rudimentary, since I'm an XQuery n00b (the query below is taken from debug output, hence the hard-coded values):

        with xmlnamespaces(default 'http://www.example.com')
        select m.Subject, m.MessageId,
               m.WordIndex.query('
                   let $dummy := 0
                   return
                   <word_list>
                   {
                       for $w in /wi/w where $w/@wid=64
                       return <word wid="64" pos="{data($w/p)}"/>
                   }
                   {
                       for $w in /wi/w where $w/@wid=70
                       return <word wid="70" pos="{data($w/p)}"/>
                   }
                   {
                       for $w in /wi/w where $w/@wid=63
                       return <word wid="63" pos="{data($w/p)}"/>
                   }
                   </word_list>
               ') as WordPosition
        from Message as m
        -- more joins go here ...
        where
        -- more conditions go here ...
            and m.WordIndex.exist('/wi/w[@wid=64]') = 1
            and m.WordIndex.exist('/wi/w[@wid=70]') = 1
            and m.WordIndex.exist('/wi/w[@wid=63]') = 1

    How can this be optimized?

  • a disk read error occurred

    - by kellogs
    Hi, "a disk read error occurred" appears on screen after choosing to boot into Windows XP from GRUB.

        [root@localhost linux]# fdisk -lu

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x48424841

        Device Boot        Start         End      Blocks   Id  System
        /dev/sda1             63   204214271   102107104+   7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2      204214272   255606783    25696256   af  HFS / HFS+
        Partition 2 does not end on cylinder boundary.
        /dev/sda3      255606784   276488191    10440704    c  W95 FAT32 (LBA)
        Partition 3 does not end on cylinder boundary.
        /dev/sda4      276490179   312576704    18043263    5  Extended
        /dev/sda5   *  276490240   286709759     5109760   83  Linux
        /dev/sda6      286712118   310488254    11888068+   b  W95 FAT32
        /dev/sda7      310488318   312576704     1044193+  82  Linux swap / Solaris

    sda is a 160GB hard disk with quite a few partitions and 3 OSes installed. I am able to boot into Linux and Mac OS fine, but not into Windows anymore. The Windows system is located on /dev/sda1.

    I cannot recall exactly how I used TestDisk, but it once said something like "The harddisk /dev/sda (160GB / 149 GB) seems too small! (< 172GB / 157GB)" or something similar.

    So far I have tried "fixboot" and "chkdsk" from a recovery console on the affected Windows partition (/dev/sda1), the unplug-the-power-cord-for-15-seconds trick, reinstalling GRUB, and repairing the MFT and boot sector of the affected partition via TestDisk. What next, please? Thank you!

  • Setfacl configuration issue in Linux

    - by Balualways
    I am configuring a Linux server with ACLs [Access Control Lists]. It is not allowing me to perform a setfacl operation on one of the directories, /xfiles. I am able to perform setfacl on other directories such as /tmp and /op/applocal/. I am getting this error:

        root@asifdl01devv # setfacl -m user:eqtrd:rw-,user:feedmgr:r--,user::---,group::r--,mask:rw-,other:--- /xfiles/change1/testfile
        setfacl: /xfiles/change1/testfile: Operation not supported

    I have defined my /etc/fstab as:

        /dev/ROOTVG/rootlv      /            ext3    defaults        1 1
        /dev/ROOTVG/varlv       /var         ext3    defaults        1 2
        /dev/ROOTVG/optlv       /opt         ext3    defaults        1 2
        /dev/ROOTVG/crashlv     /var/crash   ext3    defaults        1 2
        /dev/ROOTVG/tmplv       /tmp         ext3    defaults        1 2
        LABEL=/boot             /boot        ext3    defaults        1 2
        tmpfs                   /dev/shm     tmpfs   defaults        0 0
        devpts                  /dev/pts     devpts  gid=5,mode=620  0 0
        sysfs                   /sys         sysfs   defaults        0 0
        proc                    /proc        proc    defaults        0 0
        /dev/ROOTVG/swaplv      swap         swap    defaults        0 0
        /dev/APPVG/home         /home        ext3    defaults        1 2
        /dev/APPVG/archives     /archives    ext3    defaults        1 2
        /dev/APPVG/test         /test        ext3    defaults        1 2
        /dev/APPVG/oracle       /opt/oracle  ext3    defaults        1 2
        /dev/APPVG/ifeeds       /xfiles      ext3    defaults        1 2

    I have a Solaris server where the vfstab is defined as:

        cat vfstab
        #device                          device                          mount        FS      fsck    mount   mount
        #to mount                        to fsck                         point        type    pass    at boot options
        #
        fd                               -                               /dev/fd      fd      -       no      -
        /proc                            -                               /proc        proc    -       no      -
        /dev/vx/dsk/bootdg/swapvol       -                               -            swap    -       no      -
        swap                             -                               /tmp         tmpfs   -       yes     size=1024m
        /dev/vx/dsk/bootdg/rootvol       /dev/vx/rdsk/bootdg/rootvol     /            ufs     1       no      logging
        /dev/vx/dsk/bootdg/var           /dev/vx/rdsk/bootdg/var         /var         ufs     1       no      logging
        /dev/vx/dsk/bootdg/home          /dev/vx/rdsk/bootdg/home        /home        ufs     2       yes     logging
        /dev/vx/dsk/APP/test             /dev/vx/rdsk/APP/test           /test        vxfs    3       yes     -
        /dev/vx/dsk/APP/archives         /dev/vx/rdsk/APP/archives       /archives    vxfs    3       yes     -
        /dev/vx/dsk/APP/oracle           /dev/vx/rdsk/APP/oracle         /opt/oracle  vxfs    3       yes     -
        /dev/vx/dsk/APP/xfiles           /dev/vx/rdsk/APP/xfiles         /xfiles      vxfs    3       yes     -

    I am not able to find the issue. Any help would be appreciated.

  • XML: When to use attributes instead of child nodes?

    - by Rosarch
    For tree leaves in XML, when is it better to use attributes, and when is it better to use descendant nodes? For example, in the following XML document:

        <?xml version="1.0" encoding="utf-8" ?>
        <savedGame>
          <links>
            <link rootTagName="zombies" packageName="zombie" />
            <link rootTagName="ghosts" packageName="ghost" />
            <link rootTagName="players" packageName="player" />
            <link rootTagName="trees" packageName="tree" />
          </links>
          <locations>
            <zombies>
              <zombie>
                <positionX>41</positionX>
                <positionY>100</positionY>
              </zombie>
              <zombie>
                <positionX>55</positionX>
                <positionY>56</positionY>
              </zombie>
            </zombies>
            <ghosts>
              <ghost>
                <positionX>11</positionX>
                <positionY>90</positionY>
              </ghost>
            </ghosts>
          </locations>
        </savedGame>

    The <link> tag has attributes, but it could also be written as:

        <link>
          <rootTagName>trees</rootTagName>
          <packageName>tree</packageName>
        </link>

    Similarly, the location tags could be written as:

        <zombie positionX="55" positionY="56" />

    instead of:

        <zombie>
          <positionX>55</positionX>
          <positionY>56</positionY>
        </zombie>

    What reasons are there to prefer one over the other? Is it just a stylistic issue? Are there any performance considerations?
