Search Results

Search found 15591 results on 624 pages for 'problems'.

  • DataSets to POCOs - an inquiry regarding DAL architecture

    - by alexsome
    Hello all, I have to develop a fairly large ASP.NET MVC project very quickly and I would like to get some opinions on my DAL design to make sure nothing will come back to bite me, since the BL is likely to get pretty complex.

    A bit of background: I am working with an Oracle backend, so the built-in LINQ to SQL is out; I also need to use production-level libraries, so the Oracle EF provider project is out; finally, I am unable to use any GPL or LGPL code (Apache, MS-PL, BSD are okay), so NHibernate/Castle Project are out. I would prefer - if at all possible - to avoid dishing out money, but I am more concerned about implementing the right solution. To summarize, these are my requirements:

      - Oracle backend
      - Rapid development
      - (L)GPL-free
      - Free

    I'm reasonably happy with DataSets, but I would benefit from using POCOs as an intermediary between DataSets and views. Who knows, maybe at some point another DAL solution will show up and I will get the time to switch it out (yeah, right). So, while I could use LINQ to convert my DataSets to IQueryable, I would like a generic solution so I don't have to write a custom query for each class. I'm tinkering with reflection right now, but in the meantime I have two questions:

      1. Are there any problems I overlooked with this solution?
      2. Are there any other approaches you would recommend for converting DataSets to POCOs?

    Thanks in advance.

  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some Dell workstations (running WinXP Pro SP2 & DeepFreeze) for development, but something was recently loaded onto these machines that prevents any OpenGL call from completing (the call locks). I know the code works, as I have tested it on 'clean' machines; I also tested with simple OpenGL apps generated by Dev-C++, which also lock on the Dell machines.

    I have tried to debug my own apps to see where exactly the GL calls freeze, but there is some global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread (used by ExitThread), preventing me from debugging at all (it causes the debugger, OllyDbg, to go into an access-violation reporting loop, or the program to crash if the exception is passed along).

    The hook:

        ntdll.ZwQueryInformationProcess
        7C90D7E0    B8 9A000000      MOV EAX,9A
        7C90D7E5    BA 0003FE7F      MOV EDX,7FFE0300
        7C90D7EA    FF12             CALL DWORD PTR DS:[EDX]
        7C90D7EC  - E9 0F28448D      JMP 09D50000
        7C90D7F1    9B               WAIT
        7C90D7F2    0000             ADD BYTE PTR DS:[EAX],AL
        7C90D7F4    00BA 0003FE7F    ADD BYTE PTR DS:[EDX+7FFE0300],BH
        7C90D7FA    FF12             CALL DWORD PTR DS:[EDX]
        7C90D7FC    C2 1400          RETN 14
        7C90D7FF    90               NOP
        ntdll.ZwQueryInformationToken
        7C90D800    B8 9C000000      MOV EAX,9C

    The messed-up function + call:

        ntdll.ZwQueryInformationThread
        7C90D7F0    8D9B 000000BA    LEA EBX,DWORD PTR DS:[EBX+BA000000]
        7C90D7F6    0003             ADD BYTE PTR DS:[EBX],AL
        7C90D7F8    FE               ???              ; Unknown command
        7C90D7F9    7F FF            JG SHORT ntdll.7C90D7FA
        7C90D7FB    12C2             ADC AL,DL
        7C90D7FD    14 00            ADC AL,0
        7C90D7FF    90               NOP
        ntdll.ZwQueryInformationToken
        7C90D800    B8 9C000000      MOV EAX,9C

    So firstly, does anyone know what, if anything, would lead to OpenGL calls causing an infinite lock, and whether there are any ways around it? And what would be creating such a hook in kernel memory?

    Update: After some more fiddling, I have discovered a few more kernel hooks; a lot of them are used to nullify data returned by system information calls (such as the remote debugging port). I also managed to find out that whatever is doing this is using madchook.dll (by madshi), and this DLL is also injected into every running process (these seem to be anti-debugging measures). Also, on the OpenGL side, it seems DirectX is fine/unaffected (I ran one of the DX 9 demos without problems), so could one of these kernel hooks somehow affect OpenGL?

  • javascript innerHTML without childNodes?

    - by John Doe
    Hi all, I'm having a Firefox issue where I don't see the wood for the trees. Using Ajax I get HTML source from a PHP script; this HTML code contains a <tbody> tag, and within the tbody some more tr/td's. Now I want to append this tbody plain code to an existing table. But there is one more condition: the table is part of a form and thus contains checkboxes and drop-downs. If I were to use

        table.innerHTML += content;

    Firefox reloads the table and resets all elements within it, which isn't very user-friendly, as I'd like to keep what I have. What I have is this:

        // content equals transport.responseText from the Ajax request
        function appendToTable(content){
            var wrapper = document.createElement('table');
            wrapper.innerHTML = content;
            wrapper.setAttribute('id', 'wrappid');
            wrapper.style.display = 'none';
            document.body.appendChild(wrapper);
            // get the parsed element - well, it should be
            wrapper = document.getElementById('wrappid');
            // the destination table
            table = document.getElementById('tableid');
            // firebug prints a table element - seems right
            console.log(wrapper);
            // firebug prints the content I've inserted - seems right
            console.log(wrapper.innerHTML);
            var i = 0;
            // childNodes is iterated 2 times, both are textnodes
            // the second one seems to be a simple '\n'
            for(i=0;i<wrapper.childNodes.length;i++){
                // firebug prints 'undefined' - wth!?
                console.log(wrapper.childNodes[i].innerHTML);
                // firebug prints a textnode element - <TextNode textContent=" ">
                console.log(wrapper.childNodes[i]);
                table.appendChild(wrapper.childNodes[i]);
            }
            // WEIRD: firebug has no problems showing the 'wrappid' table and its contents
            // in the HTML view - which suggests the elements I want are there, not text elements
        }

    Either this is so trivial that I don't see the problem, OR it's a corner case, and I hope someone here has enough experience to give advice on this. Can anyone imagine why I get text nodes and not the finally parsed DOM elements I expect?

    BTW: I can't give a full example because I can't write a smaller non-working piece of code; it's one of those bugs that occur in the wild and not in my test set. Thanks all.
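
    A minimal TypeScript sketch of the node-moving step described above (the ids 'wrappid' and 'tableid' come from the code above; the nodeType filter is an assumption about the intended behaviour, not the original code): whitespace between table tags is parsed into text nodes, so skipping non-element children avoids trying to append them.

        // Hypothetical helper: move only element children (skip whitespace text nodes).
        function moveTableRows(wrapperId: string, targetId: string): void {
          const wrapper = document.getElementById(wrapperId);
          const target = document.getElementById(targetId);
          if (!wrapper || !target) {
            return; // nothing to do if either table is missing
          }
          // Snapshot the child list first, because appendChild() reparents nodes
          // and would otherwise mutate the collection while we iterate it.
          const children = Array.from(wrapper.childNodes);
          for (const node of children) {
            if (node.nodeType === Node.ELEMENT_NODE) {
              target.appendChild(node); // tbody / tr elements
            }
            // Text nodes (e.g. the stray "\n") are simply left behind.
          }
        }

        // Usage, matching the ids used in the question:
        moveTableRows('wrappid', 'tableid');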

  • How to exit current activity to homescreen (without using "Home" button)?

    - by steff
    Hi everyone, I am sure this will have been answered before, but I proved unable to find it, so please excuse my redundancy. What I am trying to do is emulate the "Home" button, which takes one back to Android's home screen. Here is what causes me problems: I have 3 launcher activities. The first one (which is connected to the homescreen icon) is just a (password-protected) configuration activity. It will not be used by the user (just admin). One of the other 2 (both accessed via an app widget) is a questionnaire app. I allow jumping back between questions via the Back button or a GUI back button as well. When the questionnaire is finished, I sum up the answers given and provide a "Finish" button which should take the user back to the home screen.

    For the questionnaire app I use a single activity (called ItemActivity) which calls itself (is that recursion as well when using intents?) to jump from one question to another:

        Questionnaire.serializeToXML();
        Intent i = new Intent().setClass(c, ItemActivity.class);
        if(Questionnaire.instance.getCurrentItemNo() == Questionnaire.instance.getAmountOfItems()) {
            Questionnaire.instance.setCompleted(true);
        } else
            Questionnaire.instance.nextItem();
        startActivity(i);

    The final screen shows something like "Thank you for participating" as well as the formerly described button, which should take one back to the homescreen. But I don't really get how to exit the Activity properly. I've e.g. used this.finish(); but this strangely brings up the "Thank you" screen again. So how can I just exit by jumping back to the homescreen?

    Sorry for the inconvenience. Regards, Steff

  • Flash CS4 [AS3]: Playing Card Deck Array

    - by Ben
    I am looking to make a card game in Flash CS4 using AS3 and I am stuck on the very first step. I have created graphics for a standard 52 card deck of playing cards and imported them into the library in Flash and then proceeded to convert them all to Movie Clips. I have also used the linkage to make them available in the code. The movie clips and the linkage are named in sequence, as in the Ace of Clubs would be C1, two of Diamonds is called D2, Jack of Spades is S11. (C = Clubs, D = Diamonds, S = Spades, H = Hearts and numbers 1 through 13 are the card values. 1 being Ace, 11 being Jack, 12 being Queen, 13 being King). As far as I know my next step would be to arrange the cards into an array. This is the part that I am having problems with. Can someone please point me in the right direction, what would be the best way to do this. Could you provide me with a bit of sample code as well? I have had a look at few tutorials online but they are all telling me different things, some are incomplete and the rest...well...they're just cr*ppy. Thanks in advance! Ben
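
    Purely as a sketch of the deck-building idea (TypeScript rather than AS3, and without the linkage-class lookup, so the names stay plain strings here): loop over the four suits and the thirteen values to build the "C1".."H13" names described above, then shuffle with Fisher-Yates.

        // Illustrative sketch only: builds the 52 linkage names and shuffles them.
        // In AS3 each name would be resolved to its exported MovieClip class.
        const SUITS = ["C", "D", "S", "H"] as const;

        function buildDeck(): string[] {
          const deck: string[] = [];
          for (const suit of SUITS) {
            for (let value = 1; value <= 13; value++) {
              deck.push(`${suit}${value}`); // e.g. "C1" = Ace of Clubs, "S11" = Jack of Spades
            }
          }
          return deck;
        }

        // Fisher-Yates shuffle, so every ordering is equally likely.
        function shuffle<T>(cards: T[]): T[] {
          for (let i = cards.length - 1; i > 0; i--) {
            const j = Math.floor(Math.random() * (i + 1));
            [cards[i], cards[j]] = [cards[j], cards[i]];
          }
          return cards;
        }

        const deck = shuffle(buildDeck()); // deck[0] is the top card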

  • How to handle dynamic role or username changes in JSF?

    - by roadrunner
    I have a JSF application running on GlassFish 2.1 with an EJB 3 backend. For authentication I use a custom realm. The user authenticates using the e-mail address and password he specified on registration. Everything is working quite well. Now I have two related problems:

    1) The user can edit his profile and - naturally - he can also change his e-mail address. Unfortunately, when I perform operations based on the current user's identity using ExternalContext.getUserPrincipal().getName(), I receive the previous e-mail address the user used on login. At the moment I handle this by forcing the user to re-authenticate after he changes his e-mail address, but is there another, more graceful possibility?

    2) Same for user roles. E.g. I have the user roles MEMBER and PREMIUM_MEMBER. A MEMBER may become a PREMIUM_MEMBER during his current session. Unfortunately, the role seems to be determined only at login. Is there any possibility that JSF and EJB recognize the new user role without the need for the user to re-authenticate?

  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment and keep a central, bare repository where I pull and push most of my stuff. This centralized repository runs on Linux, and I check out to Windows XP/7, Mac and Linux. In all repositories I put the following lines in my .git/config:

        [core]
            autocrlf = true

    I don't have the flag safecrlf=true anywhere. The first time I modify stuff on my one Windows machine (XP) there is no problem, and when I look at the diff, it looks fine. But when I do the same on the other Windows machine (7), all lines are shown as changed, even though local line endings are \r\n as expected (when checked in a hex editor). The same applies to a Mac OS X machine. Sometimes I get the feeling that the different systems wrestle over line endings, but I can't be sure (I'm losing track of all the times I change specific files).

    I didn't use to have the autocrlf flag set, but set it many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage? Or are there other things that need configuring too? I tried git checkout -- . about a million times, but with no success.
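
    For reference, a commonly used alternative to per-clone autocrlf settings is to commit the line-ending policy into the repository itself via a .gitattributes file, so every machine normalizes the same way; this is only a sketch, not a verified fix for the repository described above:

        # .gitattributes (committed at the repository root)
        # Let git normalize line endings for files it detects as text.
        * text=auto
        # Force specific endings where the platform cares.
        *.sh  text eol=lf
        *.bat text eol=crlf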

  • How to Detect in Windows Registry if user has .Net Framework installed?

    - by Sarah Weinberger
    How do I detect in the Windows Registry whether a user has the .NET Framework 4.5 installed? I am not looking for a .NET-based solution, as the query is from Inno Setup. I know from reading another post here on Stack Overflow that .NET Framework 4.5 is an in-place upgrade to 4.0. I already know how to check whether a user has version 4.0 installed on the system, namely by checking the following:

        function FindFramework(): Boolean;
        var
          bVer4x0: Boolean;
          bVer4x0Client: Boolean;
          bVer4x0Full: Boolean;
          bSuccess: Boolean;
          iInstalled: Cardinal;
        begin
          Result := False;
          bVer4x0Client := False;
          bVer4x0Full := False;

          bVer4x0 := RegKeyExists(HKLM, 'SOFTWARE\Microsoft\.NETFramework\policy\v4.0');

          bSuccess := RegQueryDWordValue(HKLM, 'Software\Microsoft\NET Framework Setup\NDP\v4\Client', 'Install', iInstalled);
          if (1 = iInstalled) AND (True = bSuccess) then bVer4x0Client := True;

          bSuccess := RegQueryDWordValue(HKLM, 'Software\Microsoft\NET Framework Setup\NDP\v4\Full', 'Install', iInstalled);
          if (1 = iInstalled) AND (True = bSuccess) then bVer4x0Full := True;

          if (True = bVer4x0Full) then
          begin
            Result := True;
          end;
        end;

    I checked the registry and there is no v4.5 folder, which makes sense if .NET Framework 4.5 is an in-place upgrade. Still, Control Panel's Programs and Features includes the listing. I know that issuing "dotNetFx45_Full_setup.exe /q" will probably have no bad effect when installing on a system that already has version 4.5, but I would still like not to install the upgrade if it already exists - faster and with fewer problems.

  • Game engine deployment strategy for the Android?

    - by Jeremy Bell
    In college, my senior project was to create a simple 2D game engine complete with a scripting language which compiled to bytecode, which was interpreted. For fun, I'd like to port the engine to android. I'm new to android development, so I'm not sure which way to go as far as deploying the engine on the phone. The easiest way I suppose would be to require the engine/interpreter to be bundled with every game that uses it. This solves any versioning issues. There are two problems with this. One: this makes each game app larger and two: I originally released the engine under the LGPL license (unfortunately), but this deployment strategy makes it difficult to conform to the rules of that license, particularly with respect to allowing users to replace the lib easily with another version. So, my other option is to somehow have the engine stand alone as an Activity or service that somehow responds to intents raised by game apps, and somehow give the engine app permissions to read the scripts and other assets to "run" the game. The user could then be able to replace the engine app with a different version (possibly one they made themselves). Is this even possible? What would you recommend? How could I handle it in a secure way?

  • Date Picker Control Not Displaying Proper Date (Access 2003)

    - by JPM
    Hi everyone, I just have a quick question. I am maintaining an app for my summer co-op position, and a new requirement came down today where the user requested to have a date control added to a form to mark the date of when an employee is "laid off". This control is enabled/disabled by a toggle button, and has its control source bound to a field that I added in the database. All the functionality has been added and tested, but.... The issue I am having is that the date picker is on a tab control (the 2nd page) and I am having problems trying to get the control to display the date that is stored in the field I created. I know the control is storing any changes made using it, but since the user asked to move the control over to the 2nd tab (it was on the first), it just shows today's date, not the date entered using the control. To make things a little more strange, if I place the control anywhere except the tab control, it seems to be working fine. I've even placed a textbox on the tab and set its control source to the database field, and it displays just fine. What gives? And I have registered the .ocx with Access, and as I mentioned before, the actual database is storing the data. Just not displaying it. Any ideas as to what I am doing wrong?

  • Comet (long polling) and XmlHttpRequest status

    - by chris_l
    I'm playing around a little bit with raw XMLHttpRequest objects + Comet long polling. (Usually, I'd let GWT or another framework handle this for me, but I want to learn more about it.) I wrote the following code:

        function longPoll() {
            var xhr = createXHR();  // Creates an XmlHttpRequest object
            xhr.open('GET', 'LongPollServlet', true);
            xhr.onreadystatechange = function () {
                if (xhr.readyState == 4) {
                    if (xhr.status == 200) {
                        ...
                    }
                    if (xhr.status > 0) {
                        longPoll();
                    }
                }
            }
            xhr.send(null);
        }

        ...
        <body onload="javascript:longPoll()">...

    I wrapped the longPoll() call in an if statement that checks for status > 0, because I found that when I leave the page (by browsing somewhere else, or by reloading it), one last unnecessary Comet call is sent. [And on Firefox, it even causes severe problems when doing a page reload, for some reason I don't fully understand yet.]

    Question: Is that status check the correct way to handle this problem, or is there a better solution?
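
    A sketch of the alternative the status check stands in for (TypeScript; 'LongPollServlet' is the endpoint named above, the rest is illustrative): keep an explicit flag that is set when the page starts to unload, so the decision to re-poll does not depend on interpreting status 0.

        // Minimal long-poll loop sketch. 'LongPollServlet' is taken from the question;
        // everything else is illustrative.
        let stopping = false;

        // Stop re-polling once the page starts to unload.
        window.addEventListener("beforeunload", () => { stopping = true; });

        function longPoll(): void {
          const xhr = new XMLHttpRequest();
          xhr.open("GET", "LongPollServlet", true);
          xhr.onreadystatechange = () => {
            if (xhr.readyState !== 4) {
              return;
            }
            if (xhr.status === 200) {
              // handle xhr.responseText here
            }
            if (!stopping) {
              longPoll(); // immediately open the next long-poll request
            }
          };
          xhr.send(null);
        }

        longPoll();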

  • Trying to draw a dynamic rectangle in SVG

    - by Shaun
    To be more specific, here are the steps I need:

      - onmousedown - set x and y of the rect to the mouse coordinates
      - onmousemove - using the current x and y mouse coordinates, calculate the height and width of the rect, set these and append it
      - onmouseup - remove the rectangle and call a function based off some calculations from the rect

    Here is what I have, but it isn't quite working (right now I have it drawing a line to make it simpler):

        // onmousedown: startbox(evt)
        function startbox(evt) {
            if(evt.button === 0) {
                x1 = evt.clientX + div.scrollLeft-5;
                y1 = evt.clientY + div.scrollTop-30;
                obj.setAttributeNS(null, "x1", x1);
                obj.setAttributeNS(null, "y1", y1);
                Root.setAttributeNS(null, "onmousemove", "updatebox(evt)");
            }
        }

        // onmousemove: updatebox(evt)
        function updatebox(evt) {
            if(evt.button === 0) {
                x2 = evt.clientX + div.scrollLeft-5;
                y2 = evt.clientY + div.scrollTop-30;
                Root.appendChild(obj);
                w = Math.abs(x2-x1);
                h = Math.abs(y2-y1);
                var strokecolor;
                if(w>20 && h>20) {
                    strokecolor = "green";
                    validbox = true;
                } else {
                    strokecolor = "red";
                    validbox = false;
                }
                var Attr = {
                    x2: x2,
                    y2: y2,
                    stroke: strokecolor
                }
                assignAttr(obj, Attr); // just loops thru adding multiple attributes
            }
        }

        // onmouseup: endbox()
        function endbox(evt) {
            if(evt.button === 0) {
                Root.setAttributeNS(null, "onmousemove", "");
                Root.removeChild(obj);
                if(validbox) {
                    //do stuff
                    validbox = !validbox;
                }
            }
        }

    Some of my problems with this are:

      - It's slow in Chrome, making drawing the line/rect feel sluggish.
      - It won't work two times in a row. This is the real problem that I can't fix.

    Any and all feedback is welcome.
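
    For comparison, a self-contained TypeScript sketch of the rubber-band rectangle idea (the SVG element id "canvas" is a placeholder, and DOM event listeners are used instead of onmousemove attributes); it is only meant to illustrate the mousedown/mousemove/mouseup flow described above, not to be a drop-in fix:

        // Illustrative sketch: draw a rubber-band <rect> on an inline SVG element.
        const SVG_NS = "http://www.w3.org/2000/svg";
        const svg = document.querySelector("svg#canvas") as SVGSVGElement; // hypothetical id

        let startX = 0, startY = 0;
        let rect: SVGElement | null = null;

        svg.addEventListener("mousedown", (evt: MouseEvent) => {
          startX = evt.offsetX;
          startY = evt.offsetY;
          rect = document.createElementNS(SVG_NS, "rect");
          rect.setAttribute("fill", "none");
          rect.setAttribute("stroke", "red");
          svg.appendChild(rect);
        });

        svg.addEventListener("mousemove", (evt: MouseEvent) => {
          if (!rect) return;
          // x/y must be the top-left corner, so take min(); width/height must be positive.
          const w = Math.abs(evt.offsetX - startX);
          const h = Math.abs(evt.offsetY - startY);
          rect.setAttribute("x", String(Math.min(evt.offsetX, startX)));
          rect.setAttribute("y", String(Math.min(evt.offsetY, startY)));
          rect.setAttribute("width", String(w));
          rect.setAttribute("height", String(h));
          rect.setAttribute("stroke", w > 20 && h > 20 ? "green" : "red");
        });

        svg.addEventListener("mouseup", () => {
          if (rect) {
            svg.removeChild(rect); // drop the rubber band; act on its final size here
            rect = null;           // allow the next drag to start cleanly
          }
        });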

  • LINQ to SQL performance on highly loaded web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to work more slowly, but it surprisingly turned out to be pretty fast, primarily because I always forgot to close my connections when using data readers. Now I don't have to worry about it.

    But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it and updates it. Primarily the updates are ++ and -- (increase and decrease values). I used to do it like this:

        UPDATE table SET value = value + 1 WHERE ID = @Id

    It worked with no problems, obviously. But with LINQ to SQL the data is taken in the beginning, moved to the class, changed and then saved:

        Stats.registeredusers++;
        Db.SubmitChanges();

    Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "let it be increased by 1". But if the value has already been increased by another request (that happens on my site all the time), then LINQ will say "oops, this value is already 100,001 - I'll throw an exception". You can change this behavior so that it won't throw an exception, but it still will not set the value to 100,002.

    Like I said, this happened to me all the time. The stats value was increased twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?

  • Using ZLib unit to compress files vs using ZipForge

    - by user193655
    There are many questions on zipping in Delphi; anyway, this is not a duplicate. I am using ZipForge for zip/unzip capability in my application. Currently I use 2 features of ZipForge: 1) zip and unzip (!) 2) password-protect the archives. Now I am removing the password from all the archives, so I only need to zip and unzip files. I zip them just to minimize bandwidth when uploading/downloading files from the server. So my idea is to process all files once, unzipping them (with password) and re-zipping them without a password. I have nothing against ZipForge; anyway, it is an extra component: every time I upgrade to a newer Delphi version I have to wait for the new IDE support, and moreover, the more components, the more problems during installation. So, since what I do is very simple, I'd like to replace ZipForge with 2 simple functions using the ZLib unit. I found (and tested) the functions here on Torry's. What do you think of using the ZLib unit? Do you see any potential problem that I would not have with ZipForge? Can you comment on speed?

  • Pushing a local mercurial repository to a remote server or cloning at server from local

    - by Samaursa
    I have a local repository that I have now decided to push to a remote server (for example, I have a host that allows Mercurial repositories, and I am also trying to push to Bitbucket). The repository has a lot of files and is a little more than 200 MB. Locally, I am able to clone the repository without problems. Now I have a lot of changes in this repository, and I have wasted a couple of days trying to figure out how to get the remote server to clone my repository. I cannot get hg serve to work outside of the LAN; I have tried everything. So instead, I created a new repository at the remote servers (both at the host and Bitbucket) with nothing in it. Now I am pushing the complete local repository to these remote locations. So far it has been unsuccessful, as the push operation gets stuck on "searching for changes" and does not give me any other useful output. I have let it go for about an hour with no change.

    Now my question is: what am I doing wrong as far as hg serve is concerned? I can access it locally but not remotely (through DynDNS - I have configured it properly and the router forwards the ports correctly), so that I can get the server to clone the repository the first time, after which I will be pushing to it. My second question is: assuming the clone at the server does not work (for example, if I were to push my current repository to Bitbucket), is creating an empty repository at the server and then pushing a local repository to the new remote repository OK? Is that the source of the "searching for changes" problem? Any help in this regard would be greatly appreciated.

  • Machine learning algorithm for data classification

    - by twk
    Hi all, I'm looking for some guidance about which techniques/algorithms I should research to solve the following problem. I currently have an algorithm that clusters similar-sounding MP3s using acoustic fingerprinting. In each cluster, I have all the different metadata (song/artist/album) for each file. For that cluster, I'd like to pick the "best" song/artist/album metadata that matches an existing row in my database, or, if there is no best match, decide to insert a new row.

    For a cluster, there is generally some correct metadata, but individual files have many types of problems:

      - the artist/song is completely misnamed, or just slightly misspelled
      - the artist/song/album is missing, but the rest of the information is there
      - the song is actually a live recording, but only some of the files in the cluster are labeled as such
      - there may be very little metadata; in some cases just the file name, which might be "artist - song.mp3", or "artist - album - song.mp3", or another variation

    A simple voting algorithm works fairly well, but I'd like to have something I can train on a large set of data that might pick up more nuances than what I've got right now. Any links to papers or similar projects would be greatly appreciated. Thanks!
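
    For concreteness, a toy TypeScript sketch of the per-field voting baseline mentioned above (the normalization and the sample data are illustrative assumptions): each file in a cluster votes once per field, and the most frequent normalized value wins.

        // Toy per-field voting over a cluster's metadata.
        interface TrackMeta {
          artist?: string;
          song?: string;
          album?: string;
        }

        // Light normalization so near-duplicate strings agree.
        function normalize(value: string): string {
          return value.toLowerCase().replace(/\s+/g, " ").trim();
        }

        function vote(cluster: TrackMeta[], field: keyof TrackMeta): string | undefined {
          const counts = new Map<string, number>();
          for (const meta of cluster) {
            const raw = meta[field];
            if (!raw) continue; // missing metadata simply doesn't vote
            const key = normalize(raw);
            counts.set(key, (counts.get(key) ?? 0) + 1);
          }
          let best: string | undefined;
          let bestCount = 0;
          for (const [value, count] of counts) {
            if (count > bestCount) {
              best = value;
              bestCount = count;
            }
          }
          return best; // undefined if no file in the cluster had this field
        }

        // Usage: pick the consensus artist/song/album for one (made-up) cluster.
        const cluster: TrackMeta[] = [
          { artist: "The Beatles", song: "Let It Be" },
          { artist: "The  Beatles", song: "Let it be", album: "Let It Be" },
          { artist: "Beatles, The", song: "Let It Be (live)" },
        ];
        const consensus = {
          artist: vote(cluster, "artist"),
          song: vote(cluster, "song"),
          album: vote(cluster, "album"),
        };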

  • Set HttpContext.Current.User from Thread.CurrentPrincipal

    - by Argons
    I have a security manager in my application that works for both Windows and web. The process is simple: it just takes the user and password and authenticates them against a database, then sets Thread.CurrentPrincipal to a custom principal. For Windows applications this works fine, but I have problems with web applications. After the authentication process, when I try to set HttpContext.Current.User to the custom principal from Thread.CurrentPrincipal, the latter contains a GenericPrincipal. Am I doing something wrong? This is my code:

    Login.aspx:

        protected void btnAuthenticate_Click(object sender, EventArgs e)
        {
            Authenticate("user", "pwd");
            FormsAuthenticationTicket authenticationTicket =
                new FormsAuthenticationTicket(1, "user", DateTime.Now, DateTime.Now.AddMinutes(30), false, "");
            string ticket = FormsAuthentication.Encrypt(authenticationTicket);
            HttpCookie authenticationCookie = new HttpCookie(FormsAuthentication.FormsCookieName, ticket);
            Response.Cookies.Add(authenticationCookie);
            Response.Redirect(FormsAuthentication.GetRedirectUrl("user", false));
        }

    Global.asax (this is where the problem appears):

        protected void Application_AuthenticateRequest(object sender, EventArgs e)
        {
            HttpCookie authCookie = Context.Request.Cookies[FormsAuthentication.FormsCookieName];
            if (authCookie == null)
                return;

            if (HttpContext.Current.User != null &&
                HttpContext.Current.User.Identity.IsAuthenticated &&
                HttpContext.Current.User.Identity is FormsIdentity)
            {
                // Here the value is GenericPrincipal
                HttpContext.Current.User = System.Threading.Thread.CurrentPrincipal;
            }
        }

    Thanks in advance for any help.

  • handling large arrays with array_diff

    - by bigmac
    I have been trying to compare two arrays. Using array_intersect presents no problems. When using array_diff with arrays of ~5,000 values, it works. When I get to ~10,000 values, the script dies when I reach array_diff. Turning on error_reporting did not produce anything. I tried creating my own array_diff function:

        function manual_array_diff($arraya, $arrayb) {
            foreach ($arraya as $keya => $valuea) {
                if (in_array($valuea, $arrayb)) {
                    unset($arraya[$keya]);
                }
            }
            return $arraya;
        }

    source: http://stackoverflow.com/questions/2479963/how-does-array-diff-work

    I would expect it to be less efficient than the official array_diff, but it can handle arrays of ~10,000. Unfortunately, both array_diffs fail when I get to ~15,000. I tried the same code on a different machine and it runs fine, so it's not an issue with the code or PHP. There must be some limit set somewhere on that particular server. Any idea how I can get around that limit, or alter it, or just find out what it is?
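
    As a side note on the performance angle (illustrative TypeScript, not PHP): the manual diff above runs in_array once per element, i.e. O(n*m); building a hash-backed set from the second array first makes the difference roughly linear, which is the usual approach once the inputs reach tens of thousands of values.

        // O(n + m) difference: everything in `a` that is not in `b`.
        function arrayDiff<T>(a: T[], b: T[]): T[] {
          const exclude = new Set(b);          // hash lookups instead of a scan per element
          return a.filter((value) => !exclude.has(value));
        }

        // Example with sizes similar to the ones in the question.
        const a = Array.from({ length: 15000 }, (_, i) => i);      // 0..14999
        const b = Array.from({ length: 15000 }, (_, i) => i * 2);  // even numbers
        console.log(arrayDiff(a, b).length); // 7500 (the odd numbers remain)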

  • Virus on site but can't find where

    - by Rob
    WARNING! THIS IS ABOUT A VIRUS ON MY SITE. IT APPEARS IT HAS BEEN THERE FOR SOMETIME AND I'VE HAD NO PROBLEMS. BUT PLEASE BE CAREFUL. READ EVERYTHING I SAY AND SEE IF YOU CAN HELP ME WITHOUT VISITING THE LINK. AVG PICKS UP ON IT AND BLOCKS IT, MCAFEE DOES NOT. Sorry about the warning, obviously i'm not here to get anyone infected or anything like that. Basically I run the website sortitoutsi dot net. Ages ago I got a virus on my computer, they got hold of my FTP passwords and added some lines of javascript to the top of my site. I removed them and believe it was fixed. However i'm using the "Web Developer" extension for Firefox and chose to view all javascript on my page and find there are various links to horrible urls such as: gittigidiyor-com.excite.co.jp.webmasterworld-com.eastmusicdirect.ru:8080/aboutus.org/aboutus.org/google.com/skycn.com/torrents.ru.php and gittigidiyor-com.excite.co.jp.webmasterworld-com.eastmusicdirect.ru:8080/index.php?jl= These terms do not appear anywhere. In the source code, in any of the javascript or the css. I also can't see that there are any rogue images that I don't recognise either. So i've no idea where this javascript is coming from. Can anyone suggest how I can find references to these links and remove them? I can see them both in the Web Developer firefox extension and in the net tab using Firebug. Any help would be greatly appreciated

  • Technical choices in unmarshaling hash-consed data

    - by Pascal Cuoq
    There seems to be quite a bit of folklore knowledge floating about in restricted circles about the pitfalls of hash-consing combined with marshaling-unmarshaling of data. I am looking for citable references to these tidbits. For instance, someone once pointed me to library aterm and mentioned that the authors had clearly thought about this and that the representation on disk was bottom-up (children of a node come before the node itself in the data stream). This is indeed the right way to do things when you need to re-share each node (with a possible identical node already in memory). This re-sharing pass needs to be done bottom-up, so the unmarshaling itself might as well be, too, so that it's possible to do everything in a single pass. I am in the process of describing difficulties encountered in our own context, and the solutions we found. I would appreciate any citable reference to the kind of aforementioned folklore knowledge. Some people obviously have encountered the problems before (the aterm library is only one example). But I didn't find anything in writing. Even the little piece of information I have about aterm is hear-say. I am not worried it's not reliable (you can't make this up), but "personal communication" and "look how it's done in the source code" are considered poor form in citations. I have enough references on hash-consing alone. I am only interested in references where it interferes with other aspects of programming, such as marshaling or distribution.
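
    Since the bottom-up constraint is the crux of the above, a schematic TypeScript sketch of hash-consing during reconstruction may help (this is generic and illustrative, not the aterm format): if children are rebuilt before their parents, each node can be re-shared through a table keyed on its label and its children's identities, in a single pass.

        // Schematic hash-consing during bottom-up reconstruction.
        interface Node {
          label: string;
          children: Node[];
          id: number; // unique per shared node
        }

        const table = new Map<string, Node>(); // key -> canonical node
        let nextId = 0;

        // Children are already canonical (bottom-up order), so a key built from the
        // label and the children's ids identifies the node up to sharing.
        function hashCons(label: string, children: Node[]): Node {
          const key = label + "(" + children.map((c) => c.id).join(",") + ")";
          const existing = table.get(key);
          if (existing) {
            return existing;            // re-share with the identical node already in memory
          }
          const node: Node = { label, children, id: nextId++ };
          table.set(key, node);
          return node;
        }

        // Rebuilding f(a, a) from a bottom-up stream shares the single canonical "a".
        const a1 = hashCons("a", []);
        const a2 = hashCons("a", []);   // returns the same object as a1
        const f = hashCons("f", [a1, a2]);
        console.log(a1 === a2);         // true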

  • Discussion on SEO best-practices for site development involving php...

    - by Bradley Herman
    Recently in our work I've started getting some experience with SEO (finally). It's something I've put off for a long time because I've always maintained that SEO is buzz-word b.s. pseudo-science, and that it's really about providing quality, relevant content (assuming proper header tags and the basics are covered). However, sometimes a client doesn't have stellar content yet still demands SEO and high rankings.

    While it's not how I design sites 100% of the time (as design dictates structure), I typically create a basic template from the design my boss gives me, then optimize it, and then strip the top and bottom and move those into header.php and footer.php, using the following to bring in the header and footer based on AJAX versus HTML requests:

        <?php if($_SERVER['HTTP_X_REQUESTED_WITH']==''){ include('includes/header.php'); }?>
        #content here
        <?php if($_SERVER['HTTP_X_REQUESTED_WITH']==''){ include('includes/footer.php'); }?>

    Then I use jQuery to intercept page requests and use AJAX to fill in, for example, a #copy div with the new content. This avoids unnecessarily loading all the header and footer info every time, but still allows users without JavaScript to access pages without any problems. (Also worth thinking about: depending on the size of the content, do the extra HTTP requests added by this method put more strain on the server than a single, larger file?)

    I don't have a really solid understanding of meta keywords and their SEO significance, but as I recall reading, the keywords, title, and description on a page should match up with the page's content - i.e. each page should have slightly different keywords/description while retaining some common ground.

    What I'm getting at here is trying to foster a discussion on whether my approach is flawed to begin with, whether there are things I can do (within reason) that keep the site structure simple but allow for better SEO practices, or whether my SEO understanding is wrong. This isn't a question, per se, but hopefully a constructive discussion that more than just I can learn from. I appreciate any responses and hope to hear from you. Thanks!
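
    One detail worth noting about the PHP check above: it only distinguishes requests that actually send the X-Requested-With header; jQuery's AJAX helpers send it automatically, while a hand-rolled request has to set it explicitly. A small TypeScript sketch (the URL and the #copy id are placeholders consistent with the description above):

        // Load a fragment the way the PHP snippet expects: mark the request as AJAX
        // so header.php / footer.php are skipped, then drop the HTML into #copy.
        async function loadFragment(url: string): Promise<void> {
          const response = await fetch(url, {
            headers: { "X-Requested-With": "XMLHttpRequest" }, // what jQuery sends by default
          });
          const html = await response.text();
          const target = document.getElementById("copy");
          if (target) {
            target.innerHTML = html; // only the content fragment, no header/footer
          }
        }

        loadFragment("/about.php"); // hypothetical page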

  • @PrePersist with entity inheritance

    - by gerry
    I'm having some problems with inheritance and the @PrePersist annotation. My source code looks like the following.

    The 'base' class with the annotated updateDates() method:

        @javax.persistence.Entity
        @Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
        public class Base implements Serializable {
            ...
            @Id
            @GeneratedValue
            protected Long id;
            ...
            @Column(nullable=false)
            @Temporal(TemporalType.TIMESTAMP)
            private Date creationDate;

            @Column(nullable=false)
            @Temporal(TemporalType.TIMESTAMP)
            private Date lastModificationDate;
            ...
            public Date getCreationDate() {
                return creationDate;
            }

            public void setCreationDate(Date creationDate) {
                this.creationDate = creationDate;
            }

            public Date getLastModificationDate() {
                return lastModificationDate;
            }

            public void setLastModificationDate(Date lastModificationDate) {
                this.lastModificationDate = lastModificationDate;
            }
            ...
            @PrePersist
            protected void updateDates() {
                if (creationDate == null) {
                    creationDate = new Date();
                }
                lastModificationDate = new Date();
            }
        }

    Now the 'Child' class that should inherit all methods "and annotations" from the base class:

        @javax.persistence.Entity
        @NamedQueries({
            @NamedQuery(name=Sensor.QUERY_FIND_ALL, query="SELECT s FROM Sensor s")
        })
        public class Sensor extends Entity {
            ...
            // additional attributes
            @Column(nullable=false)
            protected String value;
            ...
            // additional getters, setters
            ...
        }

    If I store/persist instances of the Base class to the database, everything works fine; the dates are getting updated. But now, if I want to persist a child instance, the database throws the following exception:

        MySQLIntegrityConstraintViolationException: Column 'CREATIONDATE' cannot be null

    So, in my opinion, this is caused because in the child class the method "@PrePersist protected void updateDates()" is not called/invoked before persisting the instances to the database. What is wrong with my code?

  • Jquery Cycle Lite plugin unable to see images that have been loaded by an AJAX call

    - by William Macdonald
    Hi, I am having problems using the jQuery Cycle Lite plugin on some images that are added via AJAX. Here is the jQuery code:

        $(function() {
            resizeWindow();
            $(window).bind("resize", resizeWindow);

            $("#assignment-nav").accordion({ header: "h3", autoHeight: false });

            $(".project").click(function() {
                // get the HTML and load into div
                $('.image-holder').empty();
                var justTheNumber = $(this).attr('id').replace('project-','');
                $.get("get_project_images.php", {project_id:justTheNumber},
                    function(data){
                        $('.image-holder').append(data);
                    }
                );

                $(".image-holder").cycle({   // Cycle plugin
                    prev: '#prev',
                    next: '#next',
                    timeout: 0,
                    speed: 250
                })
            });
        });

    My code works fine in that the IMG tags are loaded and the first image of the slideshow is displayed. However, the prev/next buttons don't work. When I load the images via static HTML, the slideshow prev/next links work fine (I just copied and pasted the generated HTML). I understand I need to use something like .bind or .live to make the Cycle plugin 'see' the new images. I have tried everything I can think of but I can't make it work. What am I doing wrong?
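
    One pattern commonly suggested for this kind of timing problem - shown here only as a sketch, assuming the same markup and Cycle Lite plugin as above - is to initialize the slideshow inside the $.get callback, after the response has actually been appended, since $.get is asynchronous and the new images are not yet in the DOM at the point where .cycle() runs in the code above.

        // Sketch only; assumes jQuery and the Cycle Lite plugin are already loaded.
        declare const $: any;

        $(".project").click(function (this: HTMLElement) {
          const projectId = $(this).attr("id").replace("project-", "");
          const holder = $(".image-holder");

          holder.empty();
          $.get("get_project_images.php", { project_id: projectId }, function (data: string) {
            holder.append(data);
            // Initialize the slideshow only once the new images exist in the DOM.
            holder.cycle({
              prev: "#prev",
              next: "#next",
              timeout: 0,
              speed: 250,
            });
          });
        });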

  • JSF - Creating an overlay for popup panels.

    - by Ben
    Hi, I've created an overlay that pops up whenever someone wants to upload a file to the system. The GUI looks like this (when the overlay is up). I have two problems with this:

      1. I attached an a4j:support object that, on click, makes the overlay disappear. The problem is that when I click the upload button on the upload component, support catches the click event and closes the overlay together with the upload component before I have the chance to finish the operation.
      2. I chose two different style classes, one for the overlay and one for the upload panel. But the styling of the overlay takes over the upload component and it becomes transparent as well.

    The implementation looks something like this:

        <h:panelgroup layout="block" styleClass="overlayClass">
            <rich:fileUpload styleClass="uploadStyleClass"... />
            <a4j:support event="onclick" action="#{mrBean.switchOverlayState}" reRender="..."/>
        </h:panelGroup>

    The CSS:

        .overlayClass {
            opacity: 0.5;
            position: fixed;
            left: 0;
            right: 0;
            top: 0;
            bottom: 0;
            background: #000;
        }

        .uploadStyleClass {
            opacity: 1.0;
            ...
        }

    Thanks for the help!

  • Pseudo-quicksort time complexity

    - by Ord
    I know that quicksort has O(n log n) average time complexity. A pseudo-quicksort (which is only a quicksort when you look at it from far enough away, with a suitably high level of abstraction) that is often used to demonstrate the conciseness of functional languages is as follows (given in Haskell):

        quicksort :: Ord a => [a] -> [a]
        quicksort []     = []
        quicksort (p:xs) = quicksort [y | y<-xs, y<p] ++ [p] ++ quicksort [y | y<-xs, y>=p]

    Okay, so I know this thing has problems. The biggest problem with this is that it does not sort in place, which is normally a big advantage of quicksort. Even if that didn't matter, it would still take longer than a typical quicksort because it has to do two passes of the list when it partitions it, and it does costly append operations to splice it back together afterwards. Further, the choice of the first element as the pivot is not the best choice.

    But even considering all of that, isn't the average time complexity of this quicksort the same as the standard quicksort? Namely, O(n log n)? Because the appends and the partition still have linear time complexity, even if they are inefficient.
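
    For the intuition behind the question, the standard average-case argument carries over, because the two partition passes and the appends are still Θ(n) work per call; only the constant factor changes. Assuming the input is in uniformly random order (so the pivot's rank is uniform), the recurrence is the textbook one:

        T(n) = T(i) + T(n - i - 1) + \Theta(n), \quad i = \text{rank of the pivot}

        T_{\mathrm{avg}}(n) = \Theta(n) + \frac{1}{n} \sum_{i=0}^{n-1} \bigl( T_{\mathrm{avg}}(i) + T_{\mathrm{avg}}(n-i-1) \bigr)
                            = \Theta(n) + \frac{2}{n} \sum_{i=0}^{n-1} T_{\mathrm{avg}}(i)

    which solves to T_avg(n) = Θ(n log n), exactly as for the in-place version.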
