Search Results

Search found 20833 results on 834 pages for 'oracle advice'.


  • NoHostAvailableException With Cassandra & DataStax Java Driver If Large ResultSet

    - by hughj
    The setup: 2-node Cassandra 1.2.6 cluster replicas=2 very large CQL3 table with no secondary index Rowkey is a UUID.randomUUID().toString() read consistency set to ONE Using DataStax java driver 1.0 The request: Attempting to do a table scan by "SELECT some-col from schema.table LIMIT nnn;" The fail: Once I go beyond a certain nnn LIMIT, I start to get NoHostAvailableExceptions from the driver. It reads like this: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered)) at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:64) at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:214) at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:169) at com.jpmc.es.rtm.storage.impl.EventExtract.main(EventExtract.java:36) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120) Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered)) at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:98) at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:165) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) Given: This is probably not the most enlightened thing to do to a large table with millions of rows, but this is how I learn what not to do, so I would really appreciate someone who could volunteer how this kind of error can be debugged. For example, when this happens, there are no indications that the nodes in the cluster ever had an issue with the request (there is nothing in the logs on either node that indicate any timeout or failure). Also, I enabled the trace on the driver, which gives you some nice autotrace (ala Oracle) info as long as the query succeeds. But in this case, the driver blows a NoHostAvailableException and no ExecutionInfo is available, so tracing has not provided any benefit in this case. I also find it interesting that this does not seem to be recorded as a timeout (my JMX consoles tell me no timeouts have occurred). So, I am left not understanding WHERE the failure is actually occurring. I am left with the idea that it is the driver that is having a problem, but I don't know how to debug it (and I would really like to). I have read several posts from folks that state that query'g for resultSets 10000 rows is probably not a good idea, and I am willing to accept this, but I would like to understand what is causing the exception and where the exception is happening. FWIW, I also tried bumping the timeout properties in the cassandra.yaml, but this made no difference whatsoever. I welcome any suggestions, anecdotes, insults, or monetary contributions for my registration in the house of moron-developers. Regards!!
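
    A common way to avoid one huge LIMIT on driver 1.0 (which has no automatic result paging; that arrived in driver 2.0) is to walk the table in token order in modest chunks. The sketch below is only an illustration of that pattern, with placeholder names (keyspace ks, table big_table, columns rowkey and some_col), not a drop-in fix for the exception above:

        import com.datastax.driver.core.Cluster;
        import com.datastax.driver.core.PreparedStatement;
        import com.datastax.driver.core.ResultSet;
        import com.datastax.driver.core.Row;
        import com.datastax.driver.core.Session;

        public class PagedScan {
            public static void main(String[] args) {
                Cluster cluster = Cluster.builder().addContactPoint("10.181.13.239").build();
                Session session = cluster.connect("ks");

                // Restart each chunk just past the last row key seen, in token order.
                PreparedStatement nextPage = session.prepare(
                        "SELECT rowkey, some_col FROM big_table WHERE token(rowkey) > token(?) LIMIT 1000");

                String lastKey = null;
                while (true) {
                    ResultSet rs = (lastKey == null)
                            ? session.execute("SELECT rowkey, some_col FROM big_table LIMIT 1000")
                            : session.execute(nextPage.bind(lastKey));
                    int rows = 0;
                    for (Row row : rs) {
                        lastKey = row.getString("rowkey");
                        // process row.getString("some_col") here
                        rows++;
                    }
                    if (rows == 0) {
                        break;  // scan complete
                    }
                }
                cluster.shutdown();
            }
        }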

    Read the article

  • Calling a WCF service from Java

    - by Ian Kemp
    As the title says, I need to get some Java 1.5 code to call a WCF web service. I've downloaded and used Metro to generate Java proxy classes, but they aren't generating what I expect, and I believe this is because of the WSDL that the WCF service generates. My WCF classes look like this (full code omitted for brevity): public class TestService : IService { public TestResponse DoTest(TestRequest request) { TestResponse response = new TestResponse(); // actual testing code... response.Result = ResponseResult.Success; return response; } } public class TestResponse : ResponseMessage { public bool TestSucceeded { get; set; } } public class ResponseMessage { public ResponseResult Result { get; set; } public string ResponseDesc { get; set; } public Guid ErrorIdentifier { get; set; } } public enum ResponseResult { Success, Error, Empty, } and the resulting WSDL (when I browse to http://localhost/TestService?wsdl=wsdl0) looks like this: <xsd:element name="TestResponse"> <xsd:complexType> <xsd:sequence> <xsd:element minOccurs="0" name="TestSucceeded" type="xsd:boolean" /> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="ErrorIdentifier" type="q1:guid" xmlns:q1="http://schemas.microsoft.com/2003/10/Serialization/" /> <xsd:simpleType name="ResponseResult"> <xsd:restriction base="xsd:string"> <xsd:enumeration value="Error" /> <xsd:enumeration value="Success" /> <xsd:enumeration value="EmptyResult" /> </xsd:restriction> </xsd:simpleType> <xsd:element name="ResponseResult" nillable="true" type="tns:ResponseResult" /> <xsd:element name="Result" type="tns:ResponseResult" /> <xsd:element name="ResultDesc" nillable="true" type="xsd:string" /> ... <xs:element name="guid" nillable="true" type="tns:guid" /> <xs:simpleType name="guid"> <xs:restriction base="xs:string"> <xs:pattern value="[\da-fA-F]{8}-[\da-fA-F]{4}-[\da-fA-F]{4}-[\da-fA-F]{4}-[\da-fA-F]{12}" /> </xs:restriction> </xs:simpleType> Immediately I see an issue with this WSDL: TestResponse does not contain the properties inherited from ResponseMessage. Since this service has always worked in Visual Studio I've never questioned this before, but maybe that could be causing my problem? Anyhow, when I run Metro's wsimport.bat on the service the following error message is generated: [WARNING] src-resolve.4.2: Error resolving component 'q1:guid' and the outputted Java version of TestResponse lacks any of the properties from ResponseMessage. I hacked the WSDL a bit and changed ErrorIdentifier to be typed as xsd:string, which makes the message about resolving the GUID type go away, but I still don't get any of ResponseMessage's properties. Finally, I altered the WSDL to include the 3 properties from ResponseMessage in TestResponse, and of course the end result is that the generated .java file contains them. However, when I actually call the WCF service from Java, those 3 properties are always null. Any advice, apart from writing the proxy classes myself?
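
    If the inherited members never make it into the generated schema, one thing worth trying is making the contract explicit with data contract attributes, so the base class is declared as part of the contract rather than left to the serializer's defaults. A minimal sketch using the names from the question (whether it changes the WSDL depends on how the rest of the service contract is set up):

        using System;
        using System.Runtime.Serialization;

        [DataContract]
        public class ResponseMessage
        {
            [DataMember]
            public ResponseResult Result { get; set; }

            [DataMember]
            public string ResponseDesc { get; set; }

            [DataMember]
            public Guid ErrorIdentifier { get; set; }
        }

        [DataContract]
        public class TestResponse : ResponseMessage
        {
            [DataMember]
            public bool TestSucceeded { get; set; }
        }

        [DataContract]
        public enum ResponseResult
        {
            [EnumMember] Success,
            [EnumMember] Error,
            [EnumMember] Empty
        }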

    Read the article

  • Is it worth moving from stored procedures to LINQ?

    - by Josef
    I'm looking at standardizing programming in an organisation. Half uses stored procedures and the other half LINQ. From what I've read there is still some debate going on on this topic. My concern is that MS is trying to slip in its own proprietary query language, LINQ, to make SQL redundant. If a few years back Microsoft had tried to win customers from Oracle and Sybase with their MSSQL database and stated that it didn't use SQL but their own proprietary query language, i.e. LINQ, I doubt many would have switched. I believe that is exactly what is happening now by introducing it into the application's business layer. I have used MS for many years, but there is one gripe I have with them, and that is that they change their direction a lot. By a lot I mean new releases of .NET, Silverlight etc. are more than 30% different from the previous version. So by the time you become productive a new release is on the way. As things stand now a web developer using .NET would need to know either VB.NET or C#, XML, XAML, JavaScript, HTML, SQL and now LINQ. That doesn't make for good productivity in my books. My concern is that once we all start using LINQ, MS will start changing it between releases, and it will become an ever-changing landscape. I believe that LINQ to SQL has already been deprecated. At least with SQL we are dealing with a more stable and standardized language. Are we looking at a programming revolution or a marketing campaign? As far as I know other languages like COBOL have stayed the same for years. A COBOL programmer from 20 years ago could pick up today's code and start working on it. Could a VB3 person work on a modern .NET web app? Would these large changes need to be made if the underlying original foundation had been sound? I worry about following MS's shaky roadmap with its dead ends and double-backs. Are there any architects out there who feel the same? Regards, Josef

    Read the article

  • How to share internet over VPN and inside a virtual machine (Windows)?

    - by mountrix
    ` My final goal is to have a virtual machine at work in which anything that happen inside (tcp, udp, ping, ...) will use the Internet connection of a computer at home. So, if inside this VM should I open an Internet browser to a site such as "show my IP", my home IP should be printed. I am also looking for a way to debug/develop a software inside this VM, but I would like to tunnel only the connections of this software, not the full graphical interface, this is why a Remote Desktop solution won't fit me. The connection between the both computer should be secured somehow, like in a SSH tunnel. This ultimately should allow me to have a portable VM in which I can connect to whatever networks I have access at home, in a secure way. This is my configuration: At work, I have a LAN-connected desktop computer, with Windows 7 Professional Edition as a host [computer W] On this same computer, I have a Virtual Box machine running Windows XP [computer V] At home, I have a laptop computer, running Windows 7 Home Edition [computer H] This laptop is connected to a Livebox 2 broadband modem by Wifi. What I am trying to do is to sit at work in front of the virtual machine [V], and connect to a webpage as if the request was issued from the laptop [H] at home, and the data should be securely tunneled between the both. But if I am using internet directly inside [W], it should use the normal LAN interface at work. To achieve my goal, I first try using VPN, than SSH tunneling, without success. I first tried to install Teamviewer between [W] and [H]. This is working fine, I can send files, share desktop, etc. Teamviewer has a VPN mode that creates a new VPN network interface with its own IP, both on computer [W] and [H]. This allowed me to connect [H] as a network computer inside [W] and I was able to share files, but not to share Internet. At this point, I tried to use from [W] the Internet as if I was at home. I setup a route (using route add from command line in [W]) in order to instruct each packet going to a given website to pass by the new VPN interface on [W], with the hope it will be forwarded to [H], but the webpage was simply inaccessible. I then tried to setup a Windows VPN connection between [W] and [H], using the Windows 7 VPN feature. [H] was the server and [W] the client. But it failed: I got the "Unable to join a remote PC while trying to VPN" 720 Error when I was setting up the client on [W]. I think the problem is the Livebox 2 that could blocks the packets. But I am not sure of this: 1) with Teamviewer it works fine, 2) Livebox 2 has a configuration page for port mapping that gives the proper configuration to map VPN ports as an example so I guess that it should allow it, 3) I opened the ports 1723 (TCP) and 500 (UDP) according to some forums. Virtual box has a network configuration parameter in which I can use the VPN network interface created by Teamviewer as a bridged connection. This is suppose to work in the sense that all packets issued by the virtual machine [V] is supposed to go directly to [H]. But I had no internet connection inside [V]. Using the NAT mode, [V] has internet. For me this is the feature that I look for: filtering all connections from the virtual box application to the VPN network interface, and the remaining should use the normal LAN interface. Apart from the build-in feature of VBox, I even do not know if it is possible to route the packet from a given application to a given interface. Finally I tried also SSH tunneling, but this is not the solution I looked for. 
Using an external SSH server (Linux), I was able to create a localhost connection on [W] (or [V]), using something like 'ssh -N -D server[H]' in order to allow a web browser located in [W] to connect to any website using the SOCKS 5 proxy created locally (SOCKS is a build-in feature of SSH). But repeating the same operation on windows, using a windows SSH server inside [W] (I tried freeSSHd), it failed: SFTP worked, but not the SOCKS tunneling, it was like the browser in [H] did not find internet. Finally only Teamviewer looked able to create a VPN between [W] and [H], but I am not able to use it, as I want, I mean using the Internet connection of [H] sitting in front of [W]. I also tried to bridge the VPN interface and the wifi interface inside [H], but it blocked my laptop, and I tried also the Internet Connection Sharing, trying to share on [H] the wifi connection over the VPN interface. This fails also, but it seems because Teamviewer actually use the wifi interface to be able to provide the VPN link, so I guess I am creating a recursive loop. I do not know what to try next... Thank you for any advice!!
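
    For the SSH variant, one detail that often trips this up is that the dynamic forward only listens on localhost of the machine that ran ssh, so nothing else (including a VM) can reach it. A rough sketch of the setup described above, assuming an SSH server on [H] is reachable from work (the hostname and port are placeholders):

        # On [W]: open a SOCKS5 proxy that forwards everything through the home laptop [H].
        # Binding to 0.0.0.0 makes it listen on all interfaces so the VM [V] can use it too;
        # the default bind address is 127.0.0.1, which only the host itself can reach.
        ssh -N -D 0.0.0.0:1080 user@home-laptop.example.com

        # Inside [V] (NAT networking is fine): point the browser or the application under
        # test at SOCKS5 proxy <IP of W>:1080. A "show my IP" site should then report the
        # home connection's address.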

    Read the article

  • Should we develop a custom membership provider in this case?

    - by Allen
    I'll be adding a bounty to this, probably 200, more if you guys think its appropriate. I wont accept an answer until I can add a bounty so feel free to go ahead and answer now Summary Long story short, we've been tasked with gutting the authentication and authorization parts of a fairly old and bloated asp.net application that previously had all of these components written from scratch. Since our application isn't a typical one, and none of us have experience in asp.net's built in membership provider stuff, we're not sure if we should roll our own authentication and authorization again or if we should try to work within the asp.net membership provider mindset and develop our own membership provider. Our Application We have a fairly old asp.net application that gets installed at customer locations to service clients on a LAN. Admins create users (users do not sign up) and depending on the install, we may have the software integrated with LDAP. Currently, the LDAP integration bulk-imports the users to our database and when they login, it authenticates against LDAP so we dont have to manage their passwords. Nothing amazing there. Admins can assign users to 1 group and they can change the authorization of that group to manage access to various parts of the software. Groups are maintained by Admins (web based UI) and as said earlier, granted / denied permissions to certain functionality within the application. All this was completely written from the ground up without using any of the built in .net authorization or authentication. We literally have IsLoggedIn() methods that check for login and redirect to our login page if they aren't. Our Rewrite We've been tasked to integrate more tightly with LDAP, they want us to tie groups in our application to groups (or whatever types of containers that LDAP uses) in LDAP so that when a customer opt's to use our LDAP integration, they dont have to manage their users in LDAP AND in our application. The new way, they will simply create users in LDAP, add them to Groups in LDAP and our application will see that they belong to the appropriate LDAP group and authenticate and authorize them. In addition, we've been granted the go ahead to completely rip out the User authentication and authorization code and completely re-do it. Our Problem The problem is that none of us have any experience with asp.net membership provider functionality. The little bit of exposure I have to it makes me worry that it was not intended to be used for an application such as ours. Though, developing our own ASP.NET Membership Provider and Role Manager sounds like it would be a great experience and most likely the appropriate thing to do. Basically, I'm looking for advice, should we be using the ASP.NET Membership provider & Role Management API or should we continue to roll our own? I know this decision will be influenced by our requirements so I'm going over them below Our Requirements Just a quick n dirty list Maintain the ability to have a db of users and authenticate them and give admins (only, not users) the ability to CRUD users Allow the site to integrate with LDAP, when this is chosen, they don't want any users stored in the DB, only the relationship between Groups as they exist in our app / db and the Groups/Containers as they exist in LDAP. 
.net 3.5 is being used (mix of asp.net webforms and asp.net mvc) Has to work in ASP.NET and ASP.NET MVC (shouldn't be a problem I'm guessing) This can't be user centric, administrators need to be the only ones that CRUD (or import via ldap) users and groups We have to be able to Auth via LDAP when its configured to do so I always try to monitor my questions closely so feel free to ask for more info. Also, as a general summary of what I'm looking for in an answer is just. "You should/shouldn't use xyz, here's why". Links regarding asp.net membership provider and role management stuff are very welcome, most of the stuff I'm finding is 5+ years old. Edit: Added some stuff to "Our Rewrite"
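
    For what it's worth, the role side of the built-in model is a fairly small surface, and it is the part that maps naturally onto "our groups come from LDAP". A rough sketch of where an LDAP lookup could plug in, to give a feel for the work involved (LdapDirectory.GetGroupsFor is a hypothetical helper, and the unsupported members reflect the "admins manage groups in our own UI" requirement):

        using System;
        using System.Web.Security;

        public class LdapGroupRoleProvider : RoleProvider
        {
            public override string ApplicationName { get; set; }

            // The two members ASP.NET actually calls on authorization checks.
            public override string[] GetRolesForUser(string username)
            {
                // Placeholder: resolve the user's LDAP groups and map them to app groups.
                return LdapDirectory.GetGroupsFor(username);
            }

            public override bool IsUserInRole(string username, string roleName)
            {
                return Array.IndexOf(GetRolesForUser(username), roleName) >= 0;
            }

            // Group management stays in the application's own admin UI (or in LDAP),
            // so the write operations can simply be unsupported.
            public override void CreateRole(string roleName) { throw new NotSupportedException(); }
            public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotSupportedException(); }
            public override bool RoleExists(string roleName) { throw new NotSupportedException(); }
            public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
            public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
            public override string[] GetUsersInRole(string roleName) { throw new NotSupportedException(); }
            public override string[] GetAllRoles() { throw new NotSupportedException(); }
            public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotSupportedException(); }
        }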

    Read the article

  • How can I get Firefox to update background-color on a:hover *before* a javascript routine is run?

    - by Rob
    I'm having a Firefox-specific issue with a script I wrote to create 3d layouts. The correct behavior is that the script pulls the background-color from an element and then uses that color to draw on the canvas. When a user mouses over a link and the background-color changes to the :hover rule, the color being drawn changes on the canvas changes as well. When the user mouses out, the color should revert back to non-hover color. This works as expected in Webkit browsers and Opera, but it seems like Firefox doesn't update the background-color in CSS immediately after a mouseout event occurs, so the current background-color doesn't get drawn if a mouseout occurs and it isn't followed up by another event that calls the draw() routine. It works just fine in Opera, Chrome, and Safari. How can I get Firefox to cooperate? I'm including the code that I believe is most relevant to my problem. Any advice on how I fix this problem and get a consistent effect would be very helpful. function drawFace(coord, mid, popColor,gs,x1,x2,side) { /*Gradients in our case run either up/down or left right. We have two algorithms depending on whether or not it's a sideways facing piece. Rather than parse the "rgb(r,g,b)" string(popColor) retrieved from elsewhere, it is simply offset with the gs variable to give the illusion that it starts at a darker color.*/ var canvas = document.getElementById('depth'); //This is for excanvas.js var G_vmlCanvasManager; if (G_vmlCanvasManager != undefined) { // ie IE G_vmlCanvasManager.initElement(canvas); } //Init canvas if (canvas.getContext) { var ctx = canvas.getContext('2d'); if (side) var lineargradient=ctx.createLinearGradient(coord[x1][0]+gs,mid[1],mid[0],mid[1]); else var lineargradient=ctx.createLinearGradient(coord[0][0],coord[2][1]+gs,coord[0][0],mid[1]); lineargradient.addColorStop(0,popColor); lineargradient.addColorStop(1,'black'); ctx.fillStyle=lineargradient; ctx.beginPath(); //Draw from one corner to the midpoint, then to the other corner, //and apply a stroke and a fill. ctx.moveTo(coord[x1][0],coord[x1][1]); ctx.lineTo(mid[0],mid[1]); ctx.lineTo(coord[x2][0],coord[x2][1]); ctx.stroke(); ctx.fill(); } } function draw(e) { var arr = new Array() var i = 0; var mid = new Array(2); $(".pop").each(function() { mid[0]=Math.round($(document).width()/2); mid[1]=Math.round($(document).height()/2); arr[arr.length++]=new getElemProperties(this,mid); i++; }); arr.sort(sortByDistance); clearCanvas(); for (a=0;a<i;a++) { /*In the following conditional statements, we're testing to see which direction faces should be drawn, based on a 1-point perspective drawn from the midpoint. In the first statement, we're testing to see if the lower-left hand corner coord[3] is higher on the screen than the midpoint. If so, we set it's gradient starting position to start at a point in space 60pixels higher(-60) than the actual side, and we also declare which corners make up our face, in this case the lower two corners, coord[3], and coord[2].*/ if (arr[a].bottomFace) drawFace(arr[a].coord,mid,arr[a].popColor,-60,3,2); if (arr[a].topFace) drawFace(arr[a].coord,mid,arr[a].popColor,60,0,1); if (arr[a].leftFace) drawFace(arr[a].coord,mid,arr[a].popColor,60,0,3,true); if (arr[a].rightFace) drawFace(arr[a].coord,mid,arr[a].popColor,-60,1,2,true); } } $("a.pop").bind("mouseenter mouseleave focusin focusout",draw); If you need to see the effect in action, or if you want the full javascript code, you can check it out here: http://www.robnixondesigns.com/strangematter/
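
    One low-tech workaround for the stale-style problem is to let the event handler return before the colour is read, so the :hover rule has been applied (or removed) by the time draw() looks at it. A sketch of the idea, assuming draw() reads the colour via getComputedStyle/currentStyle:

        // Defer the redraw by one tick so Firefox has recomputed the CSS for the
        // element that was just hovered or un-hovered before draw() samples it.
        $("a.pop").bind("mouseenter mouseleave focusin focusout", function (e) {
            var evt = e;
            setTimeout(function () {
                draw(evt);
            }, 0);
        });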

    Read the article

  • Performance required to improve Windows Experience Index?

    - by Ian Boyd
    Is there a guide on the metrics required to obtain a certain Windows Experience Index? A Microsoft guy said in January 2009: On the matter of transparency, it is indeed our plan to disclose in great detail how the scores are calculated, what the tests attempt to measure, why, and how they map to realistic scenarios and usage patterns. Has that amount of transparency happened? Is there a technet article somewhere? If my score was limited by my Memory subscore of 5.9. A nieve person would suggest: Buy a faster RAM Which is wrong of course. From the Windows help: If your computer has a 64-bit central processing unit (CPU) and 4 gigabytes (GB) or less random access memory (RAM), then the Memory (RAM) subscore for your computer will have a maximum of 5.9. You can buy the fastest, overclocked, liquid-cooled, DDR5 RAM on the planet; you'll still have a maximum Memory subscore of 5.9. So in general the knee-jerk advice "buy better stuff" is not helpful. What i am looking for is attributes required to achieve a certain score, or move beyond a current limitation. The information i've been able to compile so far, chiefly from 3 Windows blog entries, and an article: Memory subscore Score Conditions ======= ================================ 1.0 < 256 MB 2.0 < 500 MB 2.9 <= 512 MB 3.5 < 704 MB 3.9 < 944 MB 4.5 <= 1.5 GB 5.9 < 4.0GB-64MB on a 64-bit OS Windows Vista highest score 7.9 Windows 7 highest score Graphics Subscore Score Conditions ======= ====================== 1.0 doesn't support DX9 1.9 doesn't support WDDM 4.9 does not support Pixel Shader 3.0 5.9 doesn't support DX10 or WDDM1.1 Windows Vista highest score 7.9 Windows 7 highest score Gaming graphics subscore Score Result ======= ============================= 1.0 doesn't support D3D 2.0 supports D3D9, DX9 and WDDM 5.9 doesn't support DX10 or WDDM1.1 Windows Vista highest score 6.0-6.9 good framerates (e.g. 40-50fps) at normal resoltuions (e.g. 1280x1024) 7.0-7.9 even higher framerates at even higher resolutions 7.9 Windows 7 highest score Processor subscore Score Conditions ======= ========================================================================== 5.9 Windows Vista highest score 6.0-6.9 many quad core processors will be able to score in the high 6 low 7 ranges 7.0+ many quad core processors will be able to score in the high 6 low 7 ranges 7.9 8-core systems will be able to approach 8.9 Windows 7 highest score Primary hard disk subscore (note) Score Conditions ======= ======================================== 1.9 Limit for pathological drives that stop responding when pending writes 2.0 Limit for pathological drives that stop responding when pending writes 2.9 Limit for pathological drives that stop responding when pending writes 3.0 Limit for pathological drives that stop responding when pending writes 5.9 highest you're likely to see without SSD Windows Vista highest score 7.9 Windows 7 highest score Bonus Chatter You can find your WEI detailed test results in: C:\Windows\Performance\WinSAT\DataStore e.g. 
2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml <WinSAT> <WinSPR> <DiskScore>5.9</DiskScore> </WinSPR> <Metrics> <DiskMetrics> <AvgThroughput units="MB/s" score="6.4" ioSize="65536" kind="Sequential Read">89.95188</AvgThroughput> <AvgThroughput units="MB/s" score="4.0" ioSize="16384" kind="Random Read">1.58000</AvgThroughput> <Responsiveness Reason="UnableToAssess" Kind="Cap">TRUE</Responsiveness> </DiskMetrics> </Metrics> </WinSAT> Pre-emptive snarky comment: "WEI is useless, it has no relation to reality" Fine, how do i increase my hard-drive's random I/O throughput? Update - Amount of memory limits rating Some people don't believe Microsoft's statement that having less than 4GB of RAM on a 64-bit edition of Windows doesn't limit the rating to 5.9: And from xxx.Formal.Assessment (Recent).WinSAT.xml: <WinSPR> <LimitsApplied> <MemoryScore> <LimitApplied Friendly="Physical memory available to the OS is less than 4.0GB-64MB on a 64-bit OS : limit mem score to 5.9" Relation="LT">4227858432</LimitApplied> </MemoryScore> </LimitsApplied> </WinSPR> References Windows Vista Team Blog: Windows Experience Index: An In-Depth Look Understand and improve your computer's performance in Windows Vista Engineering Windows 7 Blog: Engineering the Windows 7 “Windows Experience Index”

    Read the article

  • How to parse HTML with TouchXML or some other alternative.

    - by 0SX
    Hi, I'm trying to parse the HTML presented below with TouchXML but it keeps crashing when I try to extract certain attributes. I'm totally new to the parser world so I apologize for being a complete idiot. I need help to parse this HTML. What I'm trying to accomplish is to parse each attribute and value or what not and copy them to a string. I've been trying to find a good parser to parse HTML and I believe TouchXML is the best I've seen because of Tidy. Speaking of Tidy, How could I run this HTML through Tidy first then parse it? I'm not sure how to do this. Here is the code that I have so far that doesn't work due to it's not pulling everything I need from the HTML. Any help or advice would be much appreciated. Thanks My current code: NSMutableArray *res = [[NSMutableArray alloc] init]; // using local resource file NSString *XMLPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"example.html"]; NSData *XMLData = [NSData dataWithContentsOfFile:XMLPath]; CXMLDocument *doc = [[[CXMLDocument alloc] initWithData:XMLData options:0 error:nil] autorelease]; NSArray *nodes = NULL; nodes = [doc nodesForXPath:@"//div" error:nil]; for (CXMLElement *node in nodes) { NSMutableDictionary *item = [[NSMutableDictionary alloc] init]; [item setObject:[[node attributeForName:@"id"] stringValue] forKey:@"id"]; [res addObject:item]; [item release]; } NSLog(@"%@", res); [res release]; HTML file that needs to be parsed: <html> <head> <base target="_blank" /> </head> <body style="margin:2;"> <div id="group"> <div id="groupURL"><a href="http://www.example.com/groups">Group URL</a></div> <img id="grouplogo" src="http://images.example.com/groups/image.png" /> <div id="groupcomputer"><a href="http://www.example.com/groups/page" title="Group Title">Group title this would be here</a></div> <div id="groupinfos"> <div id="groupinfo-l">Person</div><div id="groupinfo-r">Ralph</div> <div id="groupinfo-l">Years</div><div id="groupinfo-r">4 years</div> <div id="groupinfo-l">Salary</div><div id="groupinfo-r">100K</div> <div id="groupinfo-l">Other</div><div id="groupoth" style="width:15px">other info</div> </body> </html> EDIT: I could use Element Parser but I need to know how to extract the Person's Name from the following example which would be Ralph in this case. <div id="groupinfo-l">Person</div><div id="groupinfo-r">Ralph</div>
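
    For pulling the person's name out of that last pair of divs, an XPath that selects the value div directly saves walking the attributes by hand. A sketch against the same CXMLDocument (error handling omitted; the @id values come from the sample HTML above):

        NSError *error = nil;
        NSArray *valueNodes = [doc nodesForXPath:@"//div[@id='groupinfo-r']" error:&error];
        if ([valueNodes count] > 0) {
            // The first groupinfo-r div in the sample holds the person's name, e.g. "Ralph".
            NSString *person = [[valueNodes objectAtIndex:0] stringValue];
            NSLog(@"Person: %@", person);
        }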

    Read the article

  • Slow queries in Rails - not sure if my indexes are being used.

    - by Max Williams
    I'm doing a quite complicated find with lots of includes, which rails is splitting into a sequence of discrete queries rather than do a single big join. The queries are really slow - my dataset isn't massive, with none of the tables having more than a few thousand records. I have indexed all of the fields which are examined in the queries but i'm worried that the indexes aren't helping for some reason: i installed a plugin called "query_reviewer" which looks at the queries used to build a page, and lists problems with them. This states that indexes AREN'T being used, and it features the results of calling 'explain' on the query, which lists various problems. Here's an example find call: Question.paginate(:all, {:page=>1, :include=>[:answers, :quizzes, :subject, {:taggings=>:tag}, {:gradings=>[:age_group, :difficulty]}], :conditions=>["((questions.subject_id = ?) or (questions.subject_id = ? and tags.name = ?))", "1", 19, "English"], :order=>"subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id", :per_page=>30}) And here are the generated sql queries: SELECT DISTINCT `questions`.id FROM `questions` LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question' LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English'))) ORDER BY subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id LIMIT 0, 30 SELECT `questions`.`id` AS t0_r0 <..etc...> FROM `questions` LEFT OUTER JOIN `answers` ON answers.question_id = questions.id LEFT OUTER JOIN `quiz_questions` ON (`questions`.`id` = `quiz_questions`.`question_id`) LEFT OUTER JOIN `quizzes` ON (`quizzes`.`id` = `quiz_questions`.`quiz_id`) LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question' LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id LEFT OUTER JOIN `age_groups` ON `age_groups`.id = `gradings`.age_group_id LEFT OUTER JOIN `difficulties` ON `difficulties`.id = `gradings`.difficulty_id WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English'))) AND `questions`.id IN (602, 634, 666, 698, 730, 762, 613, 645, 677, 709, 741, 592, 624, 656, 688, 720, 752, 603, 635, 667, 699, 731, 763, 614, 646, 678, 710, 742, 593, 625) ORDER BY subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id SELECT count(DISTINCT `questions`.id) AS count_all FROM `questions` LEFT OUTER JOIN `answers` ON answers.question_id = questions.id LEFT OUTER JOIN `quiz_questions` ON (`questions`.`id` = `quiz_questions`.`question_id`) LEFT OUTER JOIN `quizzes` ON (`quizzes`.`id` = `quiz_questions`.`quiz_id`) LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question' LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id LEFT OUTER JOIN `age_groups` ON `age_groups`.id = `gradings`.age_group_id LEFT OUTER JOIN `difficulties` ON `difficulties`.id = `gradings`.difficulty_id WHERE 
(((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English'))) Actually, looking at these all nicely formatted here, there's a crazy amount of joining going on here. This can't be optimal surely. Anyway, it looks like i have two questions. 1) I have an index on each of the ids and foreign key fields referred to here. The second of the above queries is the slowest, and calling explain on it (doing it directly in mysql) gives me the following: +----+-------------+----------------+--------+---------------------------------------------------------------------------------+-------------------------------------------------+---------+------------------------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+----------------+--------+---------------------------------------------------------------------------------+-------------------------------------------------+---------+------------------------------------------------+------+----------------------------------------------+ | 1 | SIMPLE | questions | range | PRIMARY,index_questions_on_subject_id | PRIMARY | 4 | NULL | 30 | Using where; Using temporary; Using filesort | | 1 | SIMPLE | answers | ref | index_answers_on_question_id | index_answers_on_question_id | 5 | millionaire_development.questions.id | 2 | | | 1 | SIMPLE | quiz_questions | ref | index_quiz_questions_on_question_id | index_quiz_questions_on_question_id | 5 | millionaire_development.questions.id | 1 | | | 1 | SIMPLE | quizzes | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.quiz_questions.quiz_id | 1 | | | 1 | SIMPLE | subjects | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.questions.subject_id | 1 | | | 1 | SIMPLE | taggings | ref | index_taggings_on_taggable_id_and_taggable_type,index_taggings_on_taggable_type | index_taggings_on_taggable_id_and_taggable_type | 263 | millionaire_development.questions.id,const | 1 | | | 1 | SIMPLE | tags | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.taggings.tag_id | 1 | Using where | | 1 | SIMPLE | gradings | ref | index_gradings_on_question_id | index_gradings_on_question_id | 5 | millionaire_development.questions.id | 2 | | | 1 | SIMPLE | age_groups | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.gradings.age_group_id | 1 | | | 1 | SIMPLE | difficulties | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.gradings.difficulty_id | 1 | | +----+-------------+----------------+--------+---------------------------------------------------------------------------------+-------------------------------------------------+---------+------------------------------------------------+------+----------------------------------------------+ The query_reviewer plugin has this to say about it - it lists several problems: Table questions: Using temporary table, Long key length (263), Using filesort MySQL must do an extra pass to find out how to retrieve the rows in sorted order. To resolve the query, MySQL needs to create a temporary table to hold the result. The key used for the index was rather long, potentially affecting indices in memory 2) It looks like rails isn't splitting this find up in a very optimal way. Is it, do you think? Am i better off doing several find queries manually rather than one big combined one? Grateful for any advice, max

    Read the article

  • Deserialize an XML file - an error in xml document (1,2)

    - by Lindsay Fisher
    I'm trying to deserialize an XML file which I receive from a vendor with XmlSerializer, however im getting this exception: There is an error in XML document (1, 2).InnerException Message "<delayedquotes xmlns=''> was not expected.. I've searched the stackoverflow forum, google and implemented the advice, however I'm still getting the same error. Please find the enclosed some content of the xml file: <delayedquotes id="TestData"> <headings> <title/> <bid>bid</bid> <offer>offer</offer> <trade>trade</trade> <close>close</close> <b_time>b_time</b_time> <o_time>o_time</o_time> <time>time</time> <hi.lo>hi.lo</hi.lo> <perc>perc</perc> <spot>spot</spot> </headings> <instrument id="Test1"> <title id="Test1">Test1</title> <bid>0</bid> <offer>0</offer> <trade>0</trade> <close>0</close> <b_time>11:59:00</b_time> <o_time>11:59:00</o_time> <time>11:59:00</time> <perc>0%</perc> <spot>0</spot> </instrument> </delayedquotes> and the code [Serializable, XmlRoot("delayedquotes"), XmlType("delayedquotes")] public class delayedquotes { [XmlElement("instrument")] public string instrument { get; set; } [XmlElement("title")] public string title { get; set; } [XmlElement("bid")] public double bid { get; set; } [XmlElement("spot")] public double spot { get; set; } [XmlElement("close")] public double close { get; set; } [XmlElement("b_time")] public DateTime b_time { get; set; } [XmlElement("o_time")] public DateTime o_time { get; set; } [XmlElement("time")] public DateTime time { get; set; } [XmlElement("hi")] public string hi { get; set; } [XmlElement("lo")] public string lo { get; set; } [XmlElement("offer")] public double offer { get; set; } [XmlElement("trade")] public double trade { get; set; } public delayedquotes() { } }
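
    Part of the trouble may simply be that the class shape does not match the document: the root element holds a headings block and repeating instrument elements, while the posted class flattens everything onto the root. A sketch of classes that mirror the sample file (property set trimmed to a few fields; b_time is kept as a string to sidestep time-only format concerns, and the file name is a placeholder):

        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        [XmlRoot("delayedquotes")]
        public class DelayedQuotes
        {
            [XmlAttribute("id")]
            public string Id { get; set; }

            [XmlElement("instrument")]
            public List<Instrument> Instruments { get; set; }
        }

        public class Instrument
        {
            [XmlAttribute("id")]
            public string Id { get; set; }

            [XmlElement("title")]
            public string Title { get; set; }

            [XmlElement("bid")]
            public double Bid { get; set; }

            [XmlElement("offer")]
            public double Offer { get; set; }

            [XmlElement("b_time")]
            public string BTime { get; set; }
        }

        class Program
        {
            static void Main()
            {
                var serializer = new XmlSerializer(typeof(DelayedQuotes));
                using (var reader = new StreamReader("delayedquotes.xml"))
                {
                    var quotes = (DelayedQuotes)serializer.Deserialize(reader);
                }
            }
        }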

    Read the article

  • Need help to debug application using .Net and MySQL

    - by Tim Murphy
    What advice can you give me on how to track down a bug I have while inserting data into MySQL database with .Net? The error message is: MySql.Data.MySqlClient.MySqlException: Duplicate entry '26012' for key 'StockNumber_Number_UNIQUE' Reviewing of the log proves that StockNumber_Number of 26012 has not been inserted yet. Products in use. Visual Studio 2008. mysql.data.dll 6.0.4.0. Windows 7 Ultimate 64 bit and Windows 2003 32 bit. Custom built ORM framework (have source code). Importing data from Access 2003 database. The code works fine for 3000 - 5000 imports. The record being imported that causes the problem in a full run works fine if just importing by itself. I've also seen the error on other records if I sort the data to be imported a different way. Have tried import with and without transactions. Have logged the hell out of the system. The SQL command to create the table: CREATE TABLE `RareItems_RareItems` ( `RareItemKey` CHAR(36) NOT NULL PRIMARY KEY, `StockNumber_Text` VARCHAR(7) NOT NULL, `StockNumber_Number` INT NOT NULL AUTO_INCREMENT, UNIQUE INDEX `StockNumber_Number_UNIQUE` (`StockNumber_Number` ASC), `OurPercentage` NUMERIC , `SellPrice` NUMERIC(19, 2) , `Author` VARCHAR(250) , `CatchWord` VARCHAR(250) , `Title` TEXT , `Publisher` VARCHAR(250) , `InternalNote` VARCHAR(250) , `DateOfPublishing` VARCHAR(250) , `ExternalNote` LONGTEXT , `Description` LONGTEXT , `Scrap` LONGTEXT , `SuppressionKey` CHAR(36) NOT NULL, `TypeKey` CHAR(36) NOT NULL, `CatalogueStatusKey` CHAR(36) NOT NULL, `CatalogueRevisedDate` DATETIME , `CatalogueRevisedByKey` CHAR(36) NOT NULL, `CatalogueToBeRevisedByKey` CHAR(36) NOT NULL, `DontInsure` BIT NOT NULL, `ExtraCosts` NUMERIC(19, 2) , `IsWebReady` BIT NOT NULL, `LocationKey` CHAR(36) NOT NULL, `LanguageKey` CHAR(36) NOT NULL, `CatalogueDescription` VARCHAR(250) , `PlacePublished` VARCHAR(250) , `ToDo` LONGTEXT , `Headline` VARCHAR(250) , `DepartmentKey` CHAR(36) NOT NULL, `Temp1` INT , `Temp2` INT , `Temp3` VARCHAR(250) , `Temp4` VARCHAR(250) , `InternetStatusKey` CHAR(36) NOT NULL, `InternetStatusInfo` LONGTEXT , `PurchaseKey` CHAR(36) NOT NULL, `ConsignmentKey` CHAR(36) , `IsSold` BIT NOT NULL, `RowCreated` DATETIME NOT NULL, `RowModified` DATETIME NOT NULL ); The SQL command and parameters to insert the record: INSERT INTO `RareItems_RareItems` (`RareItemKey`, `StockNumber_Text`, `StockNumber_Number`, `OurPercentage`, `SellPrice`, `Author`, `CatchWord`, `Title`, `Publisher`, `InternalNote`, `DateOfPublishing`, `ExternalNote`, `Description`, `Scrap`, `SuppressionKey`, `TypeKey`, `CatalogueStatusKey`, `CatalogueRevisedDate`, `CatalogueRevisedByKey`, `CatalogueToBeRevisedByKey`, `DontInsure`, `ExtraCosts`, `IsWebReady`, `LocationKey`, `LanguageKey`, `CatalogueDescription`, `PlacePublished`, `ToDo`, `Headline`, `DepartmentKey`, `Temp1`, `Temp2`, `Temp3`, `Temp4`, `InternetStatusKey`, `InternetStatusInfo`, `PurchaseKey`, `ConsignmentKey`, `IsSold`, `RowCreated`, `RowModified`) VALUES (@RareItemKey, @StockNumber_Text, @StockNumber_Number, @OurPercentage, @SellPrice, @Author, @CatchWord, @Title, @Publisher, @InternalNote, @DateOfPublishing, @ExternalNote, @Description, @Scrap, @SuppressionKey, @TypeKey, @CatalogueStatusKey, @CatalogueRevisedDate, @CatalogueRevisedByKey, @CatalogueToBeRevisedByKey, @DontInsure, @ExtraCosts, @IsWebReady, @LocationKey, @LanguageKey, @CatalogueDescription, @PlacePublished, @ToDo, @Headline, @DepartmentKey, @Temp1, @Temp2, @Temp3, @Temp4, @InternetStatusKey, @InternetStatusInfo, @PurchaseKey, @ConsignmentKey, @IsSold, 
@RowCreated, @RowModified) @RareItemKey = 0b625bd6-776d-43d6-9405-e97159d172a6 @StockNumber_Text = 199305 @StockNumber_Number = 26012 @OurPercentage = 22.5 @SellPrice = 1250 @Author = SPARRMAN, Anders. @CatchWord = COOK: SECOND VOYAGE @Title = A Voyage Round the World with Captain James Cook in H.M.S. Resolution… Introduction and notes by Owen Rutter, wood engravings by Peter Barker-Mill. @Publisher = @InternalNote = @DateOfPublishing = 1944 @ExternalNote = The first English translation of Sparrman’s narrative, which had originally been published in Sweden in 1802-1818, and the only complete version of his account to appear in English. The eighteenth-century translation had appeared some time before the Swedish publication of the final sections of his account. Sparrman’s observant and well-written narrative of the second voyage contains much that appears nowhere else, emphasising naturally his interests in medicine, health, and natural history.&lt;br&gt;&lt;br&gt;One of 350 numbered copies: a handsomely produced and beautifully illustrated work. @Description = Small folio, wood-engravings in the text; original olive glazed cloth, top edges gilt, a very good copy. London, Golden Cockerel Press, 1944. @Scrap = @SuppressionKey = 00000000-0000-0000-0000-000000000000 @TypeKey = 93f58155-7471-46ad-84c5-262ab9dd37e8 @CatalogueStatusKey = 00000000-0000-0000-0000-000000000003 @CatalogueRevisedDate = @CatalogueRevisedByKey = c4f6fc06-956d-44c4-b393-0d5462cbffec @CatalogueToBeRevisedByKey = 00000000-0000-0000-0000-000000000000 @DontInsure = False @ExtraCosts = @IsWebReady = False @LocationKey = 00000000-0000-0000-0000-000000000000 @LanguageKey = 00000000-0000-0000-0000-000000000000 @CatalogueDescription = @PlacePublished = Golden Cockerel Press @ToDo = @Headline = @DepartmentKey = 529578a3-9189-40de-b656-eef9039d00b8 @Temp1 = @Temp2 = @Temp3 = @Temp4 = v @InternetStatusKey = 00000000-0000-0000-0000-000000000000 @InternetStatusInfo = @PurchaseKey = 00000000-0000-0000-0000-000000000000 @ConsignmentKey = @IsSold = True @RowCreated = 8/04/2010 8:49:16 PM @RowModified = 8/04/2010 8:49:16 PM Suggestions on what is causing the error and/or how to track down what is causing the problem?
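
    Since the INSERT supplies StockNumber_Number explicitly, one place to start is checking, at the moment the failure occurs, what MySQL itself believes about that key and where the AUTO_INCREMENT counter sits relative to the explicit values being imported. A few throwaway diagnostic statements:

        -- Is 26012 really absent at the time of the failure?
        SELECT RareItemKey, StockNumber_Text, StockNumber_Number
        FROM RareItems_RareItems
        WHERE StockNumber_Number = 26012;

        -- Highest value actually stored versus the counter MySQL will hand out next.
        SELECT MAX(StockNumber_Number) AS max_assigned FROM RareItems_RareItems;
        SHOW TABLE STATUS LIKE 'RareItems_RareItems';   -- see the Auto_increment column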

    Read the article

  • How to break a Hibernate session?

    - by Péter Török
    In the Hibernate reference, it is stated several times that All exceptions thrown by Hibernate are fatal. This means you have to roll back the database transaction and close the current Session. You aren’t allowed to continue working with a Session that threw an exception. One of our legacy apps uses a single session to update/insert many records from files into a DB table. Each recourd update/insert is done in a separate transaction, which is then duly committed (or rolled back in case an error occurred). Then for the next record a new transaction is opened etc. But the same session is used throughout the whole process, even if a HibernateException was caught in the middle. We are using Oracle 9i btw with Hibernate 3.24.sp1 on JBoss 4.2. Reading the above in the book, I realized that this design may fail. So I refactored the app to use a separate session for each record update. In a unit test with a mock session factory, I could prove that it is now requesting a new session for each record update. So far, so good. However, we found no way to reproduce the session failure while testing the whole app (would this be a stress test btw, or ...?). We thought of shutting down the listener of the DB but we realized that the app is keeping a bunch of connections open to the DB, and the listener would not affect those connections. (This is a web app, activated once every night by a scheduler, but it can also be activated via the browser.) Then we tried to kill some of those connections in the DB while the app was processing updates - this resulted in some failed updates, but then the app happily continued. Apparently Hibernate is clever enough to reopen broken connections under the hood without breaking the whole session. So this might not be a critical issue, as our app seems to be robust enough even in its original form. However, the issue keeps bugging me. I would like to know: Under what circumstances does the Hibernate session really become unusable after a HibernateException was thrown? How to reproduce this in a test? (What's the proper term for such a test?)
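
    A minimal sketch of the session-per-operation pattern the reference pushes towards: one short-lived Session and Transaction per record, discarded whatever happens inside it, so a Session that has thrown is never reused (sessionFactory, record and log stand in for the surrounding import code):

        import org.hibernate.HibernateException;
        import org.hibernate.Session;
        import org.hibernate.Transaction;

        void importRecord(Object record) {
            Session session = sessionFactory.openSession();
            Transaction tx = null;
            try {
                tx = session.beginTransaction();
                session.saveOrUpdate(record);
                tx.commit();
            } catch (HibernateException e) {
                if (tx != null) {
                    tx.rollback();
                }
                // Log and carry on; the next record gets a brand-new session.
                log.error("Import failed for record", e);
            } finally {
                session.close();
            }
        }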

    Read the article

  • Motherboard/PSU crippling USB and SATA

    - by celebdor
    I very recently bought a new desktop computer. The motherboard is: Z77MX-D3H and the power supply is ocz zs series 550w. The issue I have is that once I boot to the operating system (I have tried with fedora and Ubuntu with kernels 2.6.38 - 3.4.0), my hard drive (2.5" Magnetic) occasionally makes a power switch noise and it resets. Needless to say, when this drive is the OS drive, the OS crashes. I also have a SSD that works fine with the same OS configurations, but if I have the magnetic hard drive attached as second drive, it works erratically and the reconnects result in corrupted data. I also noticed that whenever I plug an external hard drive USB2.0 or USB3.0 to the computer the issue with the reconnects is even worse: [ 52.198441] sd 7:0:0:0: [sdc] Spinning up disk... [ 57.955811] usb 4-3: USB disconnect, device number 3 [ 58.023687] .ready [ 58.023914] sd 7:0:0:0: [sdc] READ CAPACITY(16) failed [ 58.023919] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 58.023932] sd 7:0:0:0: [sdc] Sense not available. [ 58.024061] sd 7:0:0:0: [sdc] READ CAPACITY failed [ 58.024063] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 58.024064] sd 7:0:0:0: [sdc] Sense not available. [ 58.024099] sd 7:0:0:0: [sdc] Write Protect is off [ 58.024101] sd 7:0:0:0: [sdc] Mode Sense: 00 00 00 00 [ 58.024135] sd 7:0:0:0: [sdc] Asking for cache data failed [ 58.024137] sd 7:0:0:0: [sdc] Assuming drive cache: write through [ 58.024400] sd 7:0:0:0: [sdc] READ CAPACITY(16) failed [ 58.024402] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 58.024405] sd 7:0:0:0: [sdc] Sense not available. [ 58.024448] sd 7:0:0:0: [sdc] READ CAPACITY failed [ 58.024450] sd 7:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 58.024451] sd 7:0:0:0: [sdc] Sense not available. 
[ 58.024469] sd 7:0:0:0: [sdc] Asking for cache data failed [ 58.024471] sd 7:0:0:0: [sdc] Assuming drive cache: write through [ 58.024472] sd 7:0:0:0: [sdc] Attached SCSI disk [ 58.407725] usb 4-3: new SuperSpeed USB device number 4 using xhci_hcd [ 58.424921] scsi8 : usb-storage 4-3:1.0 [ 59.424185] scsi 8:0:0:0: Direct-Access WD My Passport 0740 1003 PQ: 0 ANSI: 6 [ 59.424406] scsi 8:0:0:1: Enclosure WD SES Device 1003 PQ: 0 ANSI: 6 [ 59.425098] sd 8:0:0:0: Attached scsi generic sg2 type 0 [ 59.425176] ses 8:0:0:1: Attached Enclosure device [ 59.425248] ses 8:0:0:1: Attached scsi generic sg3 type 13 [ 61.845836] sd 8:0:0:0: [sdc] 976707584 512-byte logical blocks: (500 GB/465 GiB) [ 61.845838] sd 8:0:0:0: [sdc] 4096-byte physical blocks [ 61.846336] sd 8:0:0:0: [sdc] Write Protect is off [ 61.846338] sd 8:0:0:0: [sdc] Mode Sense: 47 00 10 08 [ 61.846718] sd 8:0:0:0: [sdc] No Caching mode page present [ 61.846720] sd 8:0:0:0: [sdc] Assuming drive cache: write through [ 61.848105] sd 8:0:0:0: [sdc] No Caching mode page present [ 61.848106] sd 8:0:0:0: [sdc] Assuming drive cache: write through [ 61.857147] sdc: sdc1 [ 61.858915] sd 8:0:0:0: [sdc] No Caching mode page present [ 61.858916] sd 8:0:0:0: [sdc] Assuming drive cache: write through [ 61.858918] sd 8:0:0:0: [sdc] Attached SCSI disk [ 69.875809] usb 4-3: USB disconnect, device number 4 [ 70.275816] usb 4-3: new SuperSpeed USB device number 5 using xhci_hcd [ 70.293063] scsi9 : usb-storage 4-3:1.0 [ 71.292257] scsi 9:0:0:0: Direct-Access WD My Passport 0740 1003 PQ: 0 ANSI: 6 [ 71.292505] scsi 9:0:0:1: Enclosure WD SES Device 1003 PQ: 0 ANSI: 6 [ 71.293527] sd 9:0:0:0: Attached scsi generic sg2 type 0 [ 71.293668] ses 9:0:0:1: Attached Enclosure device [ 71.293758] ses 9:0:0:1: Attached scsi generic sg3 type 13 [ 73.323804] usb 4-3: USB disconnect, device number 5 [ 101.868078] ses 9:0:0:1: Device offlined - not ready after error recovery [ 101.868124] ses 9:0:0:1: Failed to get diagnostic page 0x50000 [ 101.868131] ses 9:0:0:1: Failed to bind enclosure -19 [ 101.868288] sd 9:0:0:0: [sdc] READ CAPACITY(16) failed [ 101.868292] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 101.868296] sd 9:0:0:0: [sdc] Sense not available. [ 101.868428] sd 9:0:0:0: [sdc] READ CAPACITY failed [ 101.868434] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 101.868439] sd 9:0:0:0: [sdc] Sense not available. [ 101.868468] sd 9:0:0:0: [sdc] Write Protect is off [ 101.868473] sd 9:0:0:0: [sdc] Mode Sense: 00 00 00 00 [ 101.868580] sd 9:0:0:0: [sdc] Asking for cache data failed [ 101.868584] sd 9:0:0:0: [sdc] Assuming drive cache: write through [ 101.868845] sd 9:0:0:0: [sdc] READ CAPACITY(16) failed [ 101.868849] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 101.868854] sd 9:0:0:0: [sdc] Sense not available. [ 101.868894] sd 9:0:0:0: [sdc] READ CAPACITY failed [ 101.868898] sd 9:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 101.868903] sd 9:0:0:0: [sdc] Sense not available. [ 101.868961] sd 9:0:0:0: [sdc] Asking for cache data failed [ 101.868966] sd 9:0:0:0: [sdc] Assuming drive cache: write through [ 101.868969] sd 9:0:0:0: [sdc] Attached SCSI disk Now, if I plug the same drive to the powered usb 2.0 hub of my monitor, the issue is not reproduced (at least on a 20h long operation). Also the issue of the usb reconnects is less frequent if the hard drive is plugged before I switch on the computer. Does anybody have some advice as to what I could do? 
Which is the faulty part/s that I should replace? As for me, I really don't know if to point my finger to the PSU or the Motherboard (I have updated to the latest firmware and checked the BIOS settings several times). EDIT: The reconnects are happening both in the Sata connected drives and the USBX connected drives.

    Read the article

  • Dynamically drawing polylines on Google Maps using PHP/MySQL

    - by arc
    Hi. I am new to the googlemaps API. I have written a small app for my mobile phone that periodically updates its location to an SQL databse. I would like to display this information on a googlemap in my browser. Ideally i'd like to then poll the database periodically and if any new co-ords have arrived, add them to the line. Best way of describing it is this; http://tiny.cc/HEIa0 In a quest to get to there, i've started on the documents on google and been modifying them to try and acheive what I want. It doesn't work - and i don't know enough to know why. I would love some advice as to why, and any pointers towards my ultimate goal would be very much welcomed. Google Maps AJAX + MySQL/PHP Example <script type="text/javascript"> //<![CDATA[ function load() { if (GBrowserIsCompatible()) { var map = new GMap2(document.getElementById("map")); map.addControl(new GSmallMapControl()); map.addControl(new GMapTypeControl()); map.setCenter(new GLatLng(47.614495, -122.341861), 13); GDownloadUrl("phpsqlajax_genxml.php", function(data) { var xml = GXml.parse(data); var line = []; var markers = xml.documentElement.getElementsByTagName("points"); for (var i = 0; i < points.length; i++) { var point = points.item(i); var lat = point.getAttribute("lat"); var lng = point.getAttribute("lng"); var latlng = new GLatLng(lat, lng); line.push(latlng); if (point.firstChild) { var station = point.firstChild.nodeValue; var marker = createMarker(latlng, station); map.addOverlay(marker); } } var polyline = new GPolyline(line, "#ff0000", 3, 1); map.addOverlay(polyline); }); } //]]> My php file is generating the following XML; <?xml version="1.0" encoding="UTF-8" ?> <points> <point lng="-122.340141" lat="47.608940"/> <point lng="-122.344391" lat="47.613590"/> <point lng="-122.356445" lat="47.624561"/> <point lng="-122.337654" lat="47.606365"/> <point lng="-122.345673" lat="47.612823"/> <point lng="-122.340363" lat="47.605961"/> <point lng="-122.345467" lat="47.613976"/> <point lng="-122.326584" lat="47.617214"/> <point lng="-122.342834" lat="47.610126"/> </points> I have successfully worked through this; http://code.google.com/apis/maps/articles/phpsqlajax.html before attempting to customise the code. Any pointers? Where am I go wrong?
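
    One concrete thing that jumps out of the JavaScript above: the node list is stored in a variable called markers but the loop reads points, and getElementsByTagName is asked for "points" while the XML elements are named point. A sketch of the corrected callback plus a simple polling loop for new rows (it assumes map is in scope as in the load() function, and the since parameter and lastTimestamp variable are hypothetical, needing matching support in phpsqlajax_genxml.php):

        GDownloadUrl("phpsqlajax_genxml.php", function (data) {
            var xml = GXml.parse(data);
            var line = [];
            var points = xml.documentElement.getElementsByTagName("point");  // element name is "point"
            for (var i = 0; i < points.length; i++) {
                var point = points.item(i);
                var lat = parseFloat(point.getAttribute("lat"));
                var lng = parseFloat(point.getAttribute("lng"));
                line.push(new GLatLng(lat, lng));
            }
            map.addOverlay(new GPolyline(line, "#ff0000", 3, 1));
        });

        // Poll periodically for new positions and redraw.
        setInterval(function () {
            GDownloadUrl("phpsqlajax_genxml.php?since=" + lastTimestamp, function (data) {
                // parse as above, append the new points to the line, and redraw
            });
        }, 30000);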

    Read the article

  • In Exim, is RBL spam rejected prior to being scanned by SpamAssassin?

    - by user955664
    I've recently been battling spam issues on our mail server. One account in particular was getting hammered with incoming spam. SpamAssassin's memory use is one of our concerns. What I've done is enable RBLs in Exim. I now see many rejection notices in the Exim log based on the various RBLs, which is good. However, when I run Eximstats, the numbers seem to be the same as they were prior to the enabling of the RBLs. I am assuming because the email is still logged in some way prior to the rejection. Is that what's happening, or am I missing something else? Does anyone know if these emails are rejected prior to being processed by SpamAssassin? Or does anyone know how I'd be able to find out? Is there a standard way to generate SpamAssassin stats, similar to Eximstats, so that I could compare the numbers? Thank you for your time and any advice. Edit: Here is the ACL section of my Exim configuration file ###################################################################### # ACLs # ###################################################################### begin acl # ACL that is used after the RCPT command check_recipient: # to block certain wellknown exploits, Deny for local domains if # local parts begin with a dot or contain @ % ! / | deny domains = +local_domains local_parts = ^[.] : ^.*[@%!/|] # to restrict port 587 to authenticated users only # see also daemon_smtp_ports above accept hosts = +auth_relay_hosts condition = ${if eq {$interface_port}{587} {yes}{no}} endpass message = relay not permitted, authentication required authenticated = * # allow local users to send outgoing messages using slashes # and vertical bars in their local parts. # Block outgoing local parts that begin with a dot, slash, or vertical # bar but allows them within the local part. # The sequence \..\ is barred. The usage of @ % and ! is barred as # before. The motivation is to prevent your users (or their virii) # from mounting certain kinds of attacks on remote sites. deny domains = !+local_domains local_parts = ^[./|] : ^.*[@%!] : ^.*/\\.\\./ # local source whitelist # accept if the source is local SMTP (i.e. not over TCP/IP). # Test for this by testing for an empty sending host field. accept hosts = : # sender domains whitelist # accept if sender domain is in whitelist accept sender_domains = +whitelist_domains # sender hosts whitelist # accept if sender host is in whitelist accept hosts = +whitelist_hosts accept hosts = +whitelist_hosts_ip # envelope senders whitelist # accept if envelope sender is in whitelist accept senders = +whitelist_senders # accept mail to postmaster in any local domain, regardless of source accept local_parts = postmaster domains = +local_domains # accept mail to abuse in any local domain, regardless of source accept local_parts = abuse domains = +local_domains # accept mail to hostmaster in any local domain, regardless of source accept local_parts = hostmaster domains =+local_domains # OPTIONAL MODIFICATIONS: # If the page you're using to notify senders of blocked email of how # to get their address unblocked will use a web form to send you email so # you'll know to unblock those senders, then you may leave these lines # commented out. However, if you'll be telling your senders of blocked # email to send an email to [email protected], then you should # replace "errors" with the left side of the email address you'll be # using, and "example.com" with the right side of the email address and # then uncomment the second two lines, leaving the first one commented. 
# Doing this will mean anyone can send email to this specific address, # even if they're at a blocked domain, and even if your domain is using # blocklists. # accept mail to [email protected], regardless of source # accept local_parts = errors # domains = example.com # deny so-called "legal" spammers" deny message = Email blocked by LBL - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains sender_domains = +blacklist_domains # deny using hostname in bad_sender_hosts blacklist deny message = Email blocked by BSHL - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains hosts = +bad_sender_hosts # deny using IP in bad_sender_hosts blacklist deny message = Email blocked by BSHL - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains hosts = +bad_sender_hosts_ip # deny using email address in blacklist_senders deny message = Email blocked by BSAL - to unblock see http://www.example.com/ domains = +use_rbl_domains senders = +blacklist_senders # By default we do NOT require sender verification. # Sender verification denies unless sender address can be verified: # If you want to require sender verification, i.e., that the sending # address is routable and mail can be delivered to it, then # uncomment the next line. If you do not want to require sender # verification, leave the line commented out #require verify = sender # deny using .spamhaus deny message = Email blocked by SPAMHAUS - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains dnslists = sbl.spamhaus.org # deny using ordb # deny message = Email blocked by ORDB - to unblock see http://www.example.com/ # # only for domains that do want to be tested against RBLs # domains = +use_rbl_domains # dnslists = relays.ordb.org # deny using sorbs smtp list deny message = Email blocked by SORBS - to unblock see http://www.example.com/ # only for domains that do want to be tested against RBLs domains = +use_rbl_domains dnslists = dnsbl.sorbs.net=127.0.0.5 # Next deny stuff from more "fuzzy" blacklists # but do bypass all checking for whitelisted host names # and for authenticated users # deny using spamcop deny message = Email blocked by SPAMCOP - to unblock see http://www.example.com/ hosts = !+relay_hosts domains = +use_rbl_domains !authenticated = * dnslists = bl.spamcop.net # deny using njabl deny message = Email blocked by NJABL - to unblock see http://www.example.com/ hosts = !+relay_hosts domains = +use_rbl_domains !authenticated = * dnslists = dnsbl.njabl.org # deny using cbl deny message = Email blocked by CBL - to unblock see http://www.example.com/ hosts = !+relay_hosts domains = +use_rbl_domains !authenticated = * dnslists = cbl.abuseat.org # deny using all other sorbs ip-based blocklist besides smtp list deny message = Email blocked by SORBS - to unblock see http://www.example.com/ hosts = !+relay_hosts domains = +use_rbl_domains !authenticated = * dnslists = dnsbl.sorbs.net!=127.0.0.6 # deny using sorbs name based list deny message = Email blocked by SORBS - to unblock see http://www.example.com/ domains =+use_rbl_domains # rhsbl list is name based dnslists = rhsbl.sorbs.net/$sender_address_domain # accept if address is in a local domain as long as recipient can be verified accept domains = +local_domains endpass message = "Unknown User" verify = recipient # 
accept if address is in a domain for which we relay as long as recipient # can be verified accept domains = +relay_domains endpass verify=recipient # accept if message comes for a host for which we are an outgoing relay # recipient verification is omitted because many MUA clients don't cope # well with SMTP error responses. If you are actually relaying from MTAs # then you should probably add recipient verify here accept hosts = +relay_hosts accept hosts = +auth_relay_hosts endpass message = authentication required authenticated = * deny message = relay not permitted # default at end of acl causes a "deny", but line below will give # an explicit error message: deny message = relay not permitted # ACL that is used after the DATA command check_message: accept
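    A relevant detail here: deny statements with dnslists in the RCPT ACL reject the message at SMTP time, before the body is ever accepted, so anything blocked that way is never handed to SpamAssassin (which only sees mail that survives to the DATA stage or delivery). One way to compare the two populations is to tally the main log directly; the sketch below assumes the stock log wording ("rejected RCPT" for ACL denials, "<=" for accepted messages) and a default log path, so adjust both for your install.

        #!/usr/bin/env python3
        # Rough tally of SMTP-time rejections vs. accepted mail in an Exim main log.
        # Messages rejected at RCPT never reach SpamAssassin.
        import sys
        from collections import Counter

        log_path = sys.argv[1] if len(sys.argv) > 1 else "/var/log/exim/mainlog"
        counts = Counter()

        with open(log_path) as log:
            for line in log:
                if "rejected RCPT" in line:
                    counts["rejected at RCPT (never scanned)"] += 1
                elif " <= " in line:
                    counts["accepted (candidates for SpamAssassin)"] += 1

        for label, total in counts.items():
            print(label, total)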

    Read the article

  • Build path issues learning Guice

    - by Preston
    I can't figure out why I'm getting the error below. I have included all the appropriate jars as far as I can tell (I have included Eclipse's .classpath file below), and all of the classpath entries resolve just fine. What am I missing? The type javax.servlet.ServletContextListener cannot be resolved. It is indirectly referenced from required .class files, on the "extends GuiceServletContextListener" line -

    import com.google.inject.Guice;
    import com.google.inject.Injector;
    import com.google.inject.servlet.GuiceServletContextListener;
    import com.google.inject.servlet.ServletModule;

    public class ServletConfig extends GuiceServletContextListener {
        @Override
        protected Injector getInjector() {
            return Guice.createInjector(new ServletModule() {
                @Override
                protected void configureServlets() {
                    // TODO: add necessary code to bind
                }
            });
        }
    }

    .Classpath

    <?xml version="1.0" encoding="UTF-8"?>
    <classpath>
        <classpathentry kind="src" path="src"/>
        <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/jdk1.7.0_21">
            <attributes>
                <attribute name="owner.project.facets" value="java"/>
            </attributes>
        </classpathentry>
        <classpathentry kind="con" path="oracle.eclipse.tools.glassfish.lib.system">
            <attributes>
                <attribute name="owner.project.facets" value="jst.web"/>
            </attributes>
        </classpathentry>
        <classpathentry kind="con" path="org.eclipse.jst.j2ee.internal.web.container"/>
        <classpathentry kind="con" path="org.eclipse.jst.j2ee.internal.module.container"/>
        <classpathentry kind="lib" path="guice-3.0/aopalliance.jar"/>
        <classpathentry kind="lib" path="guice-3.0/guice-3.0.jar"/>
        <classpathentry kind="lib" path="guice-3.0/guice-servlet-3.0.jar"/>
        <classpathentry kind="lib" path="guice-3.0/javax.inject.jar"/>
        <classpathentry kind="output" path="build/classes"/>
    </classpath>
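    The usual cause of an "indirectly referenced from required .class files" error on GuiceServletContextListener is that the Servlet API itself is missing from the compile classpath even though the Guice jars are present; the server-runtime container normally supplies it, so first check that the GlassFish library is actually attached to the project's targeted runtime. As a hypothetical workaround, an explicit library entry for a local copy of the Servlet API jar also satisfies the compiler (path and version below are placeholders; the server still provides the real jar at runtime):

        <classpathentry kind="lib" path="lib/javax.servlet-api-3.0.1.jar"/>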

    Read the article

  • Getting the constructor of an Interface Type through reflection, is there a better approach than looping through assembly types?

    - by Will Marcouiller
    I have written a generic type: IDirectorySource<T> where T : IDirectoryEntry, which I'm using to manage Active Directory entries through my interfaces objects: IGroup, IOrganizationalUnit, IUser. So that I can write the following: IDirectorySource<IGroup> groups = new DirectorySource<IGroup>(); // Where IGroup implements `IDirectoryEntry`, of course.` foreach (IGroup g in groups.ToList()) { listView1.Items.Add(g.Name).SubItems.Add(g.Description); } From the IDirectorySource<T>.ToList() methods, I use reflection to find out the appropriate constructor for the type parameter T. However, since T is given an interface type, it cannot find any constructor at all! Of course, I have an internal class Group : IGroup which implements the IGroup interface. No matter how hard I have tried, I can't figure out how to get the constructor out of my interface through my implementing class. [DirectorySchemaAttribute("group")] public interface IGroup { } internal class Group : IGroup { internal Group(DirectoryEntry entry) { NativeEntry = entry; Domain = NativeEntry.Path; } // Implementing IGroup interface... } Within the ToList() method of my IDirectorySource<T> interface implementation, I look for the constructor of T as follows: internal class DirectorySource<T> : IDirectorySource<T> { // Implementing properties... // Methods implementations... public IList<T> ToList() { Type t = typeof(T) // Let's assume we're always working with the IGroup interface as T here to keep it simple. // So, my `DirectorySchema` property is already set to "group". // My `DirectorySearcher` is already instantiated here, as I do it within the DirectorySource<T> constructor. Searcher.Filter = string.Format("(&(objectClass={0}))", DirectorySchema) ConstructorInfo ctor = null; ParameterInfo[] params = null; // This is where I get stuck for now... Please see the helper method. GetConstructor(out ctor, out params, new Type() { DirectoryEntry }); SearchResultCollection results = null; try { results = Searcher.FindAll(); } catch (DirectoryServicesCOMException ex) { // Handling exception here... } foreach (SearchResult entry in results) entities.Add(ctor.Invoke(new object() { entry.GetDirectoryEntry() })); return entities; } } private void GetConstructor(out ConstructorInfo constructor, out ParameterInfo[] parameters, Type paramsTypes) { Type t = typeof(T); ConstructorInfo[] ctors = t.GetConstructors(BindingFlags.CreateInstance | BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.InvokeMethod); bool found = true; foreach (ContructorInfo c in ctors) { parameters = c.GetParameters(); if (parameters.GetLength(0) == paramsTypes.GetLength(0)) { for (int index = 0; index < parameters.GetLength(0); ++index) { if (!(parameters[index].GetType() is paramsTypes[index].GetType())) found = false; } if (found) { constructor = c; return; } } } // Processing constructor not found message here... } My problem is that T will always be an interface, so it never finds a constructor. Is there a better way than looping through all of my assembly types for implementations of my interface? I don't care about rewriting a piece of my code, I want to do it right on the first place so that I won't need to come back again and again and again. EDIT #1 Following Sam's advice, I will for now go with the IName and Name convention. However, is it me or there's some way to improve my code? Thanks! =)
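    Since T is an interface here, typeof(T).GetConstructors() will always come back empty; the constructor has to be looked up on a concrete class. A minimal sketch of the IName/Name convention mentioned in the edit: resolve IGroup to Group in the same assembly and namespace, then ask that type for its (possibly internal) constructor. Names and flags below are illustrative, not the question's actual code.

        using System;
        using System.Reflection;

        internal static class ImplementationResolver
        {
            // Convention-based lookup: IGroup -> Group in the same assembly/namespace,
            // then fetch the constructor (internal constructors included) that takes
            // the supplied parameter types.
            public static ConstructorInfo GetImplementationConstructor(Type interfaceType, params Type[] parameterTypes)
            {
                string implementationName = interfaceType.Namespace + "." + interfaceType.Name.Substring(1);
                Type implementation = interfaceType.Assembly.GetType(implementationName, true);

                return implementation.GetConstructor(
                    BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
                    null, parameterTypes, null);
            }
        }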

    Read the article

  • Use QuickCheck by generating primes

    - by Dan
    Background For fun, I'm trying to write a property for quick-check that can test the basic idea behind cryptography with RSA. Choose two distinct primes, p and q. Let N = p*q e is some number relatively prime to (p-1)(q-1) (in practice, e is usually 3 for fast encoding) d is the modular inverse of e modulo (p-1)(q-1) For all x such that 1 < x < N, it is always true that (x^e)^d = x modulo N In other words, x is the "message", raising it to the eth power mod N is the act of "encoding" the message, and raising the encoded message to the dth power mod N is the act of "decoding" it. (The property is also trivially true for x = 1, a case which is its own encryption) Code Here are the methods I have coded up so far: import Test.QuickCheck -- modular exponentiation modExp :: Integral a => a -> a -> a -> a modExp y z n = modExp' (y `mod` n) z `mod` n where modExp' y z | z == 0 = 1 | even z = modExp (y*y) (z `div` 2) n | odd z = (modExp (y*y) (z `div` 2) n) * y -- relatively prime rPrime :: Integral a => a -> a -> Bool rPrime a b = gcd a b == 1 -- multiplicative inverse (modular) mInverse :: Integral a => a -> a -> a mInverse 1 _ = 1 mInverse x y = (n * y + 1) `div` x where n = x - mInverse (y `mod` x) x -- just a quick way to test for primality n `divides` x = x `mod` n == 0 primes = 2:filter isPrime [3..] isPrime x = null . filter (`divides` x) $ takeWhile (\y -> y*y <= x) primes -- the property prop_rsa (p,q,x) = isPrime p && isPrime q && p /= q && x > 1 && x < n && rPrime e t ==> x == (x `powModN` e) `powModN` d where e = 3 n = p*q t = (p-1)*(q-1) d = mInverse e t a `powModN` b = modExp a b n (Thanks, google and random blog, for the implementation of modular multiplicative inverse) Question The problem should be obvious: there are way too many conditions on the property to make it at all usable. Trying to invoke quickCheck prop_rsa in ghci made my terminal hang. So I've poked around the QuickCheck manual a bit, and it says: Properties may take the form forAll <generator> $ \<pattern> -> <property> How do I make a <generator> for prime numbers? Or with the other constraints, so that quickCheck doesn't have to sift through a bunch of failed conditions? Any other general advice (especially regarding QuickCheck) is welcome.
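    One way to avoid discarding almost every generated case is to build the inputs with a custom generator instead of filtering with ==>: pick p and q from a finite prefix of the primes list and pick x inside the valid range with choose. A minimal sketch reusing the question's primality test (the property itself is only hinted at in the final comment):

        import Test.QuickCheck

        -- same primality test as in the question
        n `divides` x = x `mod` n == 0
        primes = 2 : filter isPrime [3 ..]
        isPrime x = null . filter (`divides` x) $ takeWhile (\y -> y * y <= x) primes

        -- generator: two distinct primes plus a message x with 1 < x < p*q
        genRsaCase :: Gen (Integer, Integer, Integer)
        genRsaCase = do
          p <- elements (take 100 primes)
          q <- elements (take 100 primes) `suchThat` (/= p)
          x <- choose (2, p * q - 1)
          return (p, q, x)

        -- prop_rsa' = forAll genRsaCase $ \(p, q, x) ->
        --     rPrime 3 ((p - 1) * (q - 1)) ==> ...   -- only this precondition is left to discard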

    Read the article

  • VBA nested Loop flow control

    - by PCGIZMO
    I will be brief and stick to what I know. This code for the most part works as it should. The only issue is in the iteration of the x and z loop. these to loops should set the range and yLABEL for the Y loop. I can get through a set and come up with the correct range after that things go bonkers. I know some of it has to do with not breaking out of x to set z and then back to x update the range. It should work z is found then x. the range between them is set for y. then next x but y stays then rang between y and x is set for y.. so on and so forth kinda like a slinky down the stairs. or a slide rule depending on how I set the loops either way I end up all over the place after a couple iterations. I have done a few things but each time I break out of x to set z , X restarts at the top of the range. At least that's what I think I am seeing. In the example sheet i have since changed the way the way the offset works with the loop but the idea is still the same. I have goto statements at this time i was going to try figuring out conditional switches after the loops were working. Any help direction or advice is appreciated. Option Explicit Sub parse() Application.DisplayAlerts = False 'Application.EnableCancelKey = xlDisabled Dim strPath As String, strPathused As String strPath = "C:\clerk plan2" Dim objfso As FileSystemObject, objFolder As Folder, objfile As Object Set objfso = CreateObject("Scripting.FileSystemObject") Set objFolder = objfso.GetFolder(strPath) 'Loop through objWorkBooks For Each objfile In objFolder.Files If objfso.GetExtensionName(objfile.Path) = "xlsx" Then Dim objWorkbook As Workbook Set objWorkbook = Workbooks.Open(objfile.Path) ' Set path for move to at end of script strPathused = "C:\prodplan\used\" & objWorkbook.Name objWorkbook.Worksheets("inbound transfer sheet").Activate objWorkbook.Worksheets("inbound transfer sheet").Cells.UnMerge 'Range management WB Dim SRCwb As Worksheet, SRCrange1 As Range, SRCrange2 As Range, lastrow As Range Set SRCwb = objWorkbook.Worksheets("inbound transfer sheet") Set SRCrange1 = SRCwb.Range("g3:g150") Set SRCrange2 = SRCwb.Range("a1:a150") Dim DSTws As Worksheet Set DSTws = Workbooks("clerkplan2.xlsm").Worksheets("transfer") Dim STR1 As String, STR2 As String, xVAL As String, zVAL As String, xSTR As String, zSTR As String STR1 = "INBOUND TRANS" STR2 = "INBOUND CA TRANS" Dim x As Variant, z As Variant, y As Variant, zxRANGE As Range For Each z In SRCrange2 zSTR = Mid(z, 1, 16) If zSTR <> STR2 Then GoTo zNEXT If zSTR = STR2 Then zVAL = z End If For Each x In SRCrange2 xSTR = Mid(x, 1, 13) If xSTR <> STR1 Then GoTo xNEXT If xSTR = STR1 Then xVAL = x End If Dim yLABEL As String If xVAL = x And zVAL = z Then If x.Row > z.Row Then Set zxRANGE = SRCwb.Range(x.Offset(1, 0).Address & " : " & z.Offset(-1, 0).Address) yLABEL = z.Value Else Set zxRANGE = SRCwb.Range(z.Offset(-1, 0).Address & " : " & x.Offset(1, 0).Address) yLABEL = x.Value End If End If MsgBox zxRANGE.Address ' DEBUG For Each y In zxRANGE If y.Offset(0, 6) = "Temp" Or y.Offset(0, 14) = "Begin Time" Or y.Offset(0, 15) = "End Time" Or _ Len(y.Offset(0, 6)) = 0 Or Len(y.Offset(0, 14)) = 0 Or Len(y.Offset(0, 15)) = "0" Then yNEXT Set lastrow = Workbooks("clerkplan2.xlsm").Worksheets("transfer").Range("c" & DSTws.Rows.Count).End(xlUp).Offset(1, 0) y.Offset(0, 6).Copy lastrow.PasteSpecial Paste:=xlPasteValues, operation:=xlNone, skipblanks:=True, Transpose:=False DSTws.Activate ActiveCell.Offset(0, -1) = objWorkbook.Name ActiveCell.Offset(0, -2) = yLABEL objWorkbook.Activate 
y.Offset(0, 14).Copy Set lastrow = Workbooks("clerkplan2.xlsm").Worksheets("transfer").Range("d" & DSTws.Rows.Count).End(xlUp).Offset(1, 0) lastrow.PasteSpecial Paste:=xlPasteValues, operation:=xlNone, skipblanks:=True, Transpose:=False objWorkbook.Activate y.Offset(0, 15).Copy Set lastrow = Workbooks("clerkplan2.xlsm").Worksheets("transfer").Range("e" & DSTws.Rows.Count).End(xlUp).Offset(1, 0) lastrow.PasteSpecial Paste:=xlPasteValues, operation:=xlNone, skipblanks:=True, Transpose:=False yNEXT: Next y xNEXT: Next x zNEXT: Next z strPathused = "C:\clerk plan2\used\" & objWorkbook.Name objWorkbook.Close False 'Move proccesed file to new Dir Dim OldFilePath As String Dim NewFilePath As String OldFilePath = objfile 'original file location NewFilePath = strPathused ' new file location Name OldFilePath As NewFilePath ' move the file End If Next End Sub
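    The slinky effect usually comes from restarting the inner For Each x loop from the top of the range for every z, instead of remembering where the previous marker row was. A single-pass sketch of the pairing logic is below; variable names mirror the question, ProcessBlock stands in for the existing y loop body, and it assumes the STR1/STR2 rows are the only section headers in column A.

        ' One pass over column A: every STR1/STR2 row closes the previous section.
        Dim cell As Range, prevHeader As Range
        For Each cell In SRCrange2
            If Mid(cell.Value, 1, 13) = STR1 Or Mid(cell.Value, 1, 16) = STR2 Then
                If Not prevHeader Is Nothing Then
                    If cell.Row - prevHeader.Row > 1 Then
                        Set zxRANGE = SRCwb.Range(SRCwb.Cells(prevHeader.Row + 1, 1), _
                                                  SRCwb.Cells(cell.Row - 1, 1))
                        yLABEL = prevHeader.Value
                        ' ProcessBlock zxRANGE, yLABEL   ' placeholder for the y loop
                    End If
                End If
                Set prevHeader = cell
            End If
        Next cell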

    Read the article

  • Complex relationship between tables in NHibernate

    - by Ilya Kogan
    Hi all, I'm writing a Fluent NHibernate mapping for a legacy Oracle database. The challenge is that the tables have composite primary keys. If I were at total freedom, I would redesign the relationships and auto-generate primary keys, but other applications must write to the same database and read from it, so I cannot do it. These are the two tables I'll focus on: Example data Trips table: 1, 10:00, 11:00 ... 1, 12:00, 15:00 ... 1, 16:00, 19:00 ... 2, 12:00, 13:00 ... 3, 9:00, 18:00 ... Faults table: 1, 13:00 ... 1, 23:00 ... 2, 12:30 ... In this case, vehicle 1 made three trips and has two faults. The first fault happened during the second trip, and the second fault happened while the vehicle was resting. Vehicle 2 had one trip, during which a fault happened. Constraints Trips of the same vehicle never overlap. So the tables have an optional one-to-many relationship, because every fault either happens during a trip or it doesn't. If I wanted to join them in SQL, I would write: select ... from Faults left outer join Trips on Faults.VehicleId = Trips.VehicleId and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime and then I'd get a dataset where every fault appears exactly once (one-to-many as I said). Note that there is no Vehicles table, and I don't need one. But I did create a view that contains all VehicleIds from both tables, so I can use it as a junction table. What am I actually looking for? The tables are huge because they cover years of data, and every time I only need to fetch a range of a few hours. So I need a mapping and a criteria that will run something like the following SQL underneath: select ... from Faults left outer join Trips on Faults.VehicleId = Trips.VehicleId and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime where Faults.FaultTime between :p0 and :p1 Do you have any ideas how to achieve it? Note 1: Currently the application shouldn't write to the database, so persistence is not a must, although if the mapping supports persistence, it may help at some point in the future. Note 2: I know it's a tough one, so if you give me a great answer, you will be properly rewarded :) Thank you for reading this long question, and now I only hope for the best :)
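    For the mapping half of this, Fluent NHibernate expresses a composite primary key with CompositeId; a minimal sketch for the Trips table is below. Property and column names are guesses from the description, and composite-id entities also need Equals/GetHashCode overrides, omitted here. The range-filtered join itself may still be easiest to issue as the hand-written SQL above via CreateSQLQuery.

        using System;
        using FluentNHibernate.Mapping;

        public class Trip
        {
            public virtual int VehicleId { get; set; }
            public virtual DateTime TripStartTime { get; set; }
            public virtual DateTime TripEndTime { get; set; }
            // Equals/GetHashCode over (VehicleId, TripStartTime) omitted for brevity.
        }

        public class TripMap : ClassMap<Trip>
        {
            public TripMap()
            {
                Table("Trips");
                CompositeId()
                    .KeyProperty(x => x.VehicleId, "VEHICLE_ID")
                    .KeyProperty(x => x.TripStartTime, "TRIP_START_TIME");
                Map(x => x.TripEndTime, "TRIP_END_TIME");
            }
        }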

    Read the article

  • Best Practices / Patterns for Enterprise Protection/Remediation of SSNs (Social Security Numbers)

    - by Erik Neu
    I am interested in hearing about enterprise solutions for SSN handling. (I looked pretty hard for any pre-existing post on SO, including reviewing the terrific SO automated "Related Questions" list, and did not find anything, so hopefully this is not a repeat.) First, I think it is important to enumerate the reasons systems/databases use SSNs (note: these are reasons for the de facto current state; I understand that many of them are not good reasons):
    1. Required for interaction with external entities. This is the most valid case, where external entities your system interfaces with require an SSN. This would typically be government, tax and financial.
    2. SSN is used to ensure system-wide uniqueness.
    3. SSN has become the default foreign key used internally within the enterprise, to perform cross-system joins.
    4. SSN is used for user authentication (e.g., log-on).
    The enterprise solution that seems optimal to me is to create a single SSN repository that is accessed by all applications needing to look up SSN info. This repository substitutes a globally unique, random 9-digit number (ASN) for the true SSN. I see many benefits to this approach. First of all, it is obviously highly backwards-compatible: all your systems "just" have to go through a major, synchronized, one-time data-cleansing exercise, where they replace the real SSN with the alternate ASN. Also, it is centralized, so it minimizes the scope for inspection and compliance. (Obviously, as a negative, it also creates a single point of failure.) This approach would solve reasons 2 and 3 without ever requiring lookups to get the real SSN. For reason 1, authorized systems could provide an ASN and be returned the real SSN. This would of course be done over secure connections, and the requesting systems would never persist the full SSN. Also, if the requesting system only needs the last 4 digits of the SSN, then that is all that would ever be passed. Reason 4 could be handled the same way as reason 1, though obviously the best thing would be to move away from having users supply an SSN for log-on. There are a couple of papers on this: UC Berkeley: http://bit.ly/bdZPjQ Oracle Vault: bit.ly/cikbi1
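    To make the repository idea concrete, the core of the ASN service is just a two-way lookup that mints a random, non-derivable 9-digit surrogate the first time it sees a given SSN. A minimal in-memory sketch is below; a real implementation would sit behind an encrypted, access-audited store and a secured service interface, and would enforce the last-4-digits rule for most callers.

        import secrets

        class SsnVault:
            """Sketch of a central SSN <-> ASN repository (in-memory only)."""

            def __init__(self):
                self._ssn_to_asn = {}
                self._asn_to_ssn = {}

            def tokenize(self, ssn: str) -> str:
                """Return the existing ASN for this SSN, or mint a fresh random one."""
                if ssn in self._ssn_to_asn:
                    return self._ssn_to_asn[ssn]
                while True:
                    asn = f"{secrets.randbelow(10**9):09d}"   # random, not derived from the SSN
                    if asn not in self._asn_to_ssn:
                        break
                self._ssn_to_asn[ssn] = asn
                self._asn_to_ssn[asn] = ssn
                return asn

            def resolve(self, asn: str, last4_only: bool = True) -> str:
                """Authorized callers exchange an ASN for the real SSN (or just its last 4)."""
                ssn = self._asn_to_ssn[asn]
                return ssn[-4:] if last4_only else ssn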

    Read the article

  • Flex 4: Traversing the Stage More Easily

    - by Steve
    The following is a MXML Module I am producing in Flex 4: <?xml version="1.0" encoding="utf-8"?> <mx:Module xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" creationComplete="init()" layout="absolute" width="100%" height="100%"> <fx:Declarations> <!-- Place non-visual elements (e.g., services, value objects) here --> </fx:Declarations> <fx:Style source="BMChartModule.css" /> <s:Panel id="panel" title="Benchmark Results" height="100%" width="100%" dropShadowVisible="false"> <mx:TabNavigator id="tn" height="100%" width="100%" /> </s:Panel> <fx:Script> <![CDATA[ import flash.events.Event; import mx.charts.ColumnChart; import mx.charts.effects.SeriesInterpolate; import mx.controls.Alert; import spark.components.BorderContainer; import spark.components.Button; import spark.components.Label; import spark.components.NavigatorContent; import spark.components.RadioButton; import spark.components.TextInput; import spark.layouts.*; private var xml:XML; private function init():void { var seriesInterpolate:SeriesInterpolate = new SeriesInterpolate(); seriesInterpolate.duration = 1000; xml = parentApplication.model.xml; var sectorList:XMLList = xml.SECTOR; for each(var i:XML in sectorList) { var ncLayout:HorizontalLayout = new HorizontalLayout(); var nc:NavigatorContent = new NavigatorContent(); nc.label = i.@NAME; nc.name = "NC_" + nc.label; nc.layout = ncLayout; tn.addElement(nc); var cC:ColumnChart = new ColumnChart(); cC.percentWidth = 70; cC.name = "CC"; nc.addElement(cC); var bClayout:VerticalLayout = new VerticalLayout(); var bC:BorderContainer = new BorderContainer(); bC.percentWidth = 30; bC.layout = bClayout; nc.addElement(bC); var bClabel:Label = new Label(); bClabel.percentWidth = 100; bClabel.text = "Select a graph to view it in the column chart:"; var dpList:XMLList = sectorList.(@NAME == i.@NAME).DATAPOINT; for each(var j:XML in dpList) { var rB:RadioButton = new RadioButton(); rB.groupName = "dp"; rB.label = j.@NAME; rB.addEventListener(MouseEvent.CLICK, rBclick); bC.addElement(rB); } } } private function rBclick(e:MouseEvent):void { var selectedTab:NavigatorContent = this.tn.selectedChild as NavigatorContent; var colChart:ColumnChart = selectedTab.getChildByName("CC") as ColumnChart; trace(selectedTab.getChildAt(0)); } ]]> </fx:Script> </mx:Module> I'm writing this function rBclick to redraw the column chart when a radio button is clicked. In order to do this I need to find the column chart on the stage using actionscript. I've currently got 3 lines of code in here to do this: var selectedTab:NavigatorContent = this.tn.selectedChild as NavigatorContent; var colChart:ColumnChart = selectedTab.getChildByName("CC") as ColumnChart; trace(selectedTab.getChildAt(0)); Getting to the active tab in the tabnavigator is easy enough, but then selectedTab.getChildAt(0) - which I was expecting to be the chart - is a "spark.skin.spark.SkinnableContainerSkin"...anyway, I can continue to traverse the tree using this somewhat annoying code, but I'm hoping there is an easier way. So in short, at run time I want to, with as little code as possible, identify the column chart in the active tab so I can redraw it. Any advice would be greatly appreciated.
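    Two lighter-weight options than walking display-list children: ask the Spark container for its content elements (getElementAt(0) on the NavigatorContent skips the skin layer that getChildAt(0) returns), or skip the lookup entirely by keeping a Dictionary of chart references keyed by tab while the tabs are built. A sketch of the latter, meant to slot into the existing <fx:Script> block (names are illustrative):

        import flash.utils.Dictionary;

        // filled in init(), right after nc.addElement(cC):   chartByTab[nc] = cC;
        private var chartByTab:Dictionary = new Dictionary();

        private function rBclick(e:MouseEvent):void {
            var colChart:ColumnChart = chartByTab[tn.selectedChild] as ColumnChart;
            if (colChart) {
                // rebuild colChart's series/dataProvider for the selected radio button here
            }
        }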

    Read the article

  • C++/msvc6 application crashes due to heap corruption, any hints?

    - by David Alfonso
    Hello all, let me say first that I'm writing this question after months of trying to find out the root of a crash happening in our application. I'll try to detail as much as possible what I've already found out about it. About the application It runs on Windows XP Professional SP2. It's built with Microsoft Visual C++ 6.0 with Service Pack 6. It's MFC based. It uses several external dlls (e.g. Xerces, ZLib or ACE). It has high performance requirements. It does a lot of network and hard disk I/O, but it's also cpu intensive. It has an exception handling mechanism which generates a minidump when an unhandled exception occurs. Facts about the crash It only happens on multiprocessor/multicore machines and under heavy loads of work. It happens at random (neither we nor our client have found a pattern yet). We cannot reproduce the crash on our testing lab. It only happens on some production systems (but always in multicore machines) It always ends up crashing at the same point, although the complete stack is not always the same. Let me add the stack of the crashing thread (obtained using WinDbg, sorry we don't have symbols) ChildEBP RetAddr Args to Child WARNING: Stack unwind information not available. Following frames may be wrong. 030af6c8 7c9206eb 77bfc3c9 01a80000 00224bc3 MyApplication+0x2a85b9 030af960 7c91e9c0 7c92901b 00000ab4 00000000 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo]) 030af98c 7c9205c8 00000001 00000000 00000000 ntdll!ZwWaitForSingleObject+0xc (FPO: [3,0,0]) 030af9c0 7c920551 01a80898 7c92056d 313adfb0 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4]) 030afa8c 4ba3ae96 000307da 00130005 00040012 ntdll!RtlFreeHeap+0x1e9 (FPO: [Non-Fpo]) 030afacc 77bfc2e3 0214e384 3087c8d8 02151030 0x4ba3ae96 030afb00 7c91e306 7c80bfc1 00000948 00000001 msvcrt!free+0xc8 (FPO: [Non-Fpo]) 030afb20 0042965b 030afcc0 0214d780 02151218 ntdll!ZwReleaseSemaphore+0xc (FPO: [3,0,0]) 030afb7c 7c9206eb 02e6c471 02ea0000 00000008 MyApplication+0x2965b 030afe60 7c9205c8 02151248 030aff38 7c920551 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo]) 030afe74 7c92056d 0210bfb8 02151250 02151250 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4]) 030aff38 77bfc2de 01a80000 00000000 77bfc2e3 ntdll!RtlFreeHeap+0x647 (FPO: [Non-Fpo]) 7c92056d c5ffffff ce7c94be ff7c94be 00ffffff msvcrt!free+0xc3 (FPO: [Non-Fpo]) 7c920575 ff7c94be 00ffffff 12000000 907c94be 0xc5ffffff 7c920579 00ffffff 12000000 907c94be 90909090 0xff7c94be *** WARNING: Unable to verify checksum for xerces-c_2_7.dll *** ERROR: Symbol file could not be found. Defaulted to export symbols for xerces-c_2_7.dll - 7c92057d 12000000 907c94be 90909090 8b55ff8b MyApplication+0xbfffff 7c920581 907c94be 90909090 8b55ff8b 08458bec xerces_c_2_7 7c920585 90909090 8b55ff8b 08458bec 04408b66 0x907c94be 7c920589 8b55ff8b 08458bec 04408b66 0004c25d 0x90909090 7c92058d 08458bec 04408b66 0004c25d 90909090 0x8b55ff8b The address MyApplication+0x2a85b9 corresponds to a call to erase() of a std::list. What I have tried so far Reviewing all the code related to the point where the crash ends happening. Trying to enable pageheap on our testing lab though nothing useful has been found by now. We have substituted the std::list for a C array and then it crashes in other part of the code (although it is related code, it's not in the code where the old list resided). Coincidentally, now it crashes in another erase, though this time of a std::multiset. 
Let me copy the stack contained in the dump: ntdll.dll!_RtlpCoalesceFreeBlocks@16() + 0x124e bytes ntdll.dll!_RtlFreeHeap@12() + 0x91f bytes msvcrt.dll!_free() + 0xc3 bytes MyApplication.exe!006a4fda() [Frames below may be incorrect and/or missing, no symbols loaded for MyApplication.exe] MyApplication.exe!0069f305() ntdll.dll!_NtFreeVirtualMemory@16() + 0xc bytes ntdll.dll!_RtlpSecMemFreeVirtualMemory@16() + 0x1b bytes ntdll.dll!_ZwWaitForSingleObject@12() + 0xc bytes ntdll.dll!_RtlpFreeToHeapLookaside@8() + 0x26 bytes ntdll.dll!_RtlFreeHeap@12() + 0x114 bytes msvcrt.dll!_free() + 0xc3 bytes c5ffffff() Possible solutions (that I'm aware of) which cannot be applied "Migrate the application to a newer compiler": We are working on this but It's not a solution at the moment. "Enable pageheap (normal or full)": We can't enable pageheap on production machines as this affects performance heavily. I think that's all I remember now, if I have forgotten something I'll add it asap. If you can give me some hint or propose some possible solution, don't hesitate to answer! Thank you in advance for your time and advice.
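    One more low-cost lead worth having in the toolbox: debug builds can run with the CRT debug heap validating the heap on every allocation and free, which tends to move the crash right next to the code doing the corrupting write instead of a later free(). It is far too slow for production and only active in _DEBUG builds, but it does work under VC6; a sketch:

        // Debug-build-only: validate the CRT heap on every alloc/free so corruption
        // is reported at (or very near) the offending write.
        #include <crtdbg.h>

        void EnableAggressiveHeapChecks()
        {
        #ifdef _DEBUG
            int flags = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);   // read the current flags
            flags |= _CRTDBG_ALLOC_MEM_DF | _CRTDBG_CHECK_ALWAYS_DF;
            _CrtSetDbgFlag(flags);                             // write them back
        #endif
        }

        // _CrtCheckMemory() can also be sprinkled around suspect call sites for
        // cheaper, targeted validation.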

    Read the article

  • Cannot Logout of Facebook with Facebook C# SDK

    - by Ryan Smyth
    I think I've read just about everything out there on the topic of logging out of Facebook inside of a Desktop application. Nothing so far works. Specifically, I would like to log the user out so that they can switch identities, e.g. People sharing a computer at home could then use the software with their own Facebook accounts, but with no chance to switch accounts, it's quite messy. (Have not yet tested switching Windows users accounts as that is simply far too much to ask of the end user and should not be necessary.) Now, I should say that I have set the application to use these permissions: string[] permissions = new string[] { "user_photos", "publish_stream", "offline_access" }; So, "offline_access" is included there. I do not know if this does/should affect logging out or not. Again, my purpose for logging out is merely to switch users. (If there's a better approach, please let me know.) The purported solutions seem to be: Use the JavaScript SDK (FB.logout()) Use "m.facebook.com" instead Create your own URL (and possibly use m.facebook.com) Create your own URL and use the session variable (in ASP.NET) The first is kind of silly. Why resort to JavaScript when you're using C#? It's kind of a step backwards and has a lot of additional overhead in a desktop application. (I have not tried this as it's simply disgustingly messy to do this in a desktop application.) If anyone can confirm that this is the only working method, please do so. I'm desperately trying to avoid it. The second doesn't work. Perhaps it worked in the past, but my umpteen attempts to get it to work have all failed. The third doesn't work. I've tried umpteen dozen variations with zero success. The last option there doesn't work for a desktop application because it's not ASP.NET and you don't have a session variable to work with. The Facebook C# SDK logout also no longer works. i.e. public FacebookLoginDialog(string appId, string[] extendedPermissions, bool logout) { IDictionary<string, object> loginParameters = new Dictionary<string, object> { { "response_type", "token" }, { "display", "popup" } }; _navigateUri = FacebookOAuthClient.GetLoginUrl(appId, null, extendedPermissions, logout, loginParameters); InitializeComponent(); } I remember it working in the past, but it no longer works now. (Which truly puzzles me...) It instead now directs the user to the Facebook mobile page, where the user must manually logout. Now, I could do browser automation to automatically click the logout link for the user, however, this is prone to breaking if Facebook updates the mobile UI. It is also messy, and possibly a worse solution than trying to use the JavaScript SDK FB.logout() method (though not by much). I have searched for some kind of documentation, however, I cannot find anything in the Facebook developer documentation that illustrates how to logout an application. Has anyone solved this problem, or seen any documentation that can be ported to work with the Facebook C# SDK? I am certainly open to using a WebClient or HttpClient/Response if anyone can point to some documentation that could work with it. I simply have not been able to find any low-level documentation that shows how this approach could work. Thank you in advance for any advice, pointers, or links.
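    For the switch-user case specifically, one workaround that avoids the logout endpoints altogether is to clear the embedded browser's WinINet session before showing the next login dialog, so Facebook has no cookie to recognize and prompts for credentials again. This assumes the OAuth dialog is hosted in the WebBrowser control, which shares the WinINet cookie store; a sketch:

        using System;
        using System.Runtime.InteropServices;

        internal static class BrowserSession
        {
            // Ends the current WinINet browser session, dropping the session cookies
            // used by the WebBrowser control (including the Facebook login cookie).
            private const int INTERNET_OPTION_END_BROWSER_SESSION = 42;

            [DllImport("wininet.dll", SetLastError = true)]
            private static extern bool InternetSetOption(IntPtr hInternet, int dwOption,
                                                         IntPtr lpBuffer, int dwBufferLength);

            public static void Reset()
            {
                InternetSetOption(IntPtr.Zero, INTERNET_OPTION_END_BROWSER_SESSION, IntPtr.Zero, 0);
            }
        }

        // Call BrowserSession.Reset() before constructing the next FacebookLoginDialog.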

    Read the article

  • NHibernate (3.1.0.4000) NullReferenceException using Query<> and NHibernate Facility

    - by TigerShark
    I have a problem with NHibernate, I can't seem to find any solution for. In my project I have a simple entity (Batch), but whenever I try and run the following test, I get an exception. I've triede a couple of different ways to perform a similar query, but almost identical exception for all (it differs in which LINQ method being executed). The first test: [Test] public void QueryLatestBatch() { using (var session = SessionManager.OpenSession()) { var batch = session.Query<Batch>() .FirstOrDefault(); Assert.That(batch, Is.Not.Null); } } The exception: System.NullReferenceException : Object reference not set to an instance of an object. at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery) at NHibernate.Linq.NhQueryProvider.Execute(Expression expression) at System.Linq.Queryable.FirstOrDefault(IQueryable`1 source) The second test: [Test] public void QueryLatestBatch2() { using (var session = SessionManager.OpenSession()) { var batch = session.Query<Batch>() .OrderBy(x => x.Executed) .Take(1) .SingleOrDefault(); Assert.That(batch, Is.Not.Null); } } The exception: System.NullReferenceException : Object reference not set to an instance of an object. at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery) at NHibernate.Linq.NhQueryProvider.Execute(Expression expression) at System.Linq.Queryable.SingleOrDefault(IQueryable`1 source) However, this one is passing (using QueryOver<): [Test] public void QueryOverLatestBatch() { using (var session = SessionManager.OpenSession()) { var batch = session.QueryOver<Batch>() .OrderBy(x => x.Executed).Asc .Take(1) .SingleOrDefault(); Assert.That(batch, Is.Not.Null); Assert.That(batch.Executed, Is.LessThan(DateTime.Now)); } } Using the QueryOver< API is not bad at all, but I'm just kind of baffled that the Query< API isn't working, which is kind of sad, since the First() operation is very concise, and our developers really enjoy LINQ. I really hope there is a solution to this, as it seems strange if these methods are failing such a simple test. EDIT I'm using Oracle 11g, my mappings are done with FluentNHibernate registered through Castle Windsor with the NHibernate Facility. As I wrote, the odd thing is that the query works perfectly with the QueryOver< API, but not through LINQ.
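    Since QueryOver succeeds and the failure is inside NhQueryProvider.PrepareQuery, a useful first step is to run the identical LINQ query on a session taken straight from the ISessionFactory, bypassing the facility's SessionManager; if that passes, the suspect is the wrapped session the facility hands out rather than the mapping or the query itself. A diagnostic sketch (Batch is the question's entity; resolving the ISessionFactory is left to the container):

        using System.Linq;
        using NHibernate;
        using NHibernate.Linq;   // brings the Query<T>() extension method into scope

        public static class QueryDiagnostics
        {
            // Diagnostic only: compare this against the SessionManager.OpenSession() path.
            public static Batch FetchLatestBatchDirectly(ISessionFactory sessionFactory)
            {
                using (ISession rawSession = sessionFactory.OpenSession())
                {
                    return rawSession.Query<Batch>()
                                     .OrderBy(x => x.Executed)
                                     .FirstOrDefault();
                }
            }
        }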

    Read the article
