Search Results

Search found 19878 results on 796 pages for 'multiple dispatch'.


  • Why can't a Java servlet send out an object?

    - by Frank
    I use the following method to send out an object from a servlet : public void doGet(HttpServletRequest request,HttpServletResponse response) throws IOException { String Full_URL=request.getRequestURL().append("?"+request.getQueryString()).toString(); String Contact_Id=request.getParameter("Contact_Id"); String Time_Stamp=Get_Date_Format(6),query="select from "+Contact_Info_Entry.class.getName()+" where Contact_Id == '"+Contact_Id+"' order by Contact_Id desc"; PersistenceManager pm=null; try { pm=PMF.get().getPersistenceManager(); // note that this returns a list, there could be multiple, DataStore does not ensure uniqueness for non-primary key fields List<Contact_Info_Entry> results=(List<Contact_Info_Entry>)pm.newQuery(query).execute(); Write_Serialized_XML(response.getOutputStream(),results.get(0)); } catch (Exception e) { Send_Email(Email_From,Email_To,"Check_License_Servlet Error [ "+Time_Stamp+" ]",new Text(e.toString()+"\n"+Get_Stack_Trace(e)),null); } finally { pm.close(); } } /** Writes the object and CLOSES the stream. Uses the persistance delegate registered in this class. * @param os The stream to write to. * @param o The object to be serialized. */ public static void writeXMLObject(OutputStream os,Object o) { // Classloader reference must be set since netBeans uses another class loader to loead the bean wich will fail in some circumstances. ClassLoader oldClassLoader=Thread.currentThread().getContextClassLoader(); Thread.currentThread().setContextClassLoader(Check_License_Servlet.class.getClassLoader()); XMLEncoder encoder=new XMLEncoder(os); encoder.setExceptionListener(new ExceptionListener() { public void exceptionThrown(Exception e) { e.printStackTrace(); }}); encoder.writeObject(o); encoder.flush(); encoder.close(); Thread.currentThread().setContextClassLoader(oldClassLoader); } private static ByteArrayOutputStream writeOutputStream=new ByteArrayOutputStream(16384); /** Writes an object to XML. * @param out The boject out to write to. [ Will not be closed. ] * @param o The object to write. */ public static synchronized void writeAsXML(ObjectOutput out,Object o) throws IOException { writeOutputStream.reset(); writeXMLObject(writeOutputStream,o); byte[] Bt_1=writeOutputStream.toByteArray(); byte[] Bt_2=new Des_Encrypter().encrypt(Bt_1,Key); out.writeInt(Bt_2.length); out.write(Bt_2); out.flush(); out.close(); } public static synchronized void Write_Serialized_XML(OutputStream Output_Stream,Object o) throws IOException { writeAsXML(new ObjectOutputStream(Output_Stream),o); } At the receiving end the code look like this : File_Url="http://"+Site_Url+App_Dir+File_Name; try { Contact_Info_Entry Online_Contact_Entry=(Contact_Info_Entry)Read_Serialized_XML(new URL(File_Url)); } catch (Exception e) { e.printStackTrace(); } private static byte[] readBuf=new byte[16384]; public static synchronized Object readAsXML(ObjectInput in) throws IOException { // Classloader reference must be set since netBeans uses another class loader to load the bean which will fail under some circumstances. 
ClassLoader oldClassLoader=Thread.currentThread().getContextClassLoader(); Thread.currentThread().setContextClassLoader(Tool_Lib_Simple.class.getClassLoader()); int length=in.readInt(); readBuf=new byte[length]; in.readFully(readBuf,0,length); byte Bt[]=new Des_Encrypter().decrypt(readBuf,Key); XMLDecoder dec=new XMLDecoder(new ByteArrayInputStream(Bt,0,Bt.length)); Object o=dec.readObject(); Thread.currentThread().setContextClassLoader(oldClassLoader); in.close(); return o; } public static synchronized Object Read_Serialized_XML(URL File_Url) throws IOException { return readAsXML(new ObjectInputStream(File_Url.openStream())); } But I can't get the object from the Java app that's on the receiving end, why ? The error messages look like this : java.lang.ClassNotFoundException: PayPal_Monitor.Contact_Info_Entry Continuing ... java.lang.NullPointerException: target should not be null Continuing ... java.lang.NullPointerException: target should not be null Continuing ... java.lang.NullPointerException: target should not be null Continuing ...


  • Error using Session in IIS7

    - by flashnik
    After deployment of my website to IIS I'm getting a following error message when trying to access session: Session state can only be used when enableSessionState is set to true, either in a configuration file or in the Page directive. Please also make sure that System.Web.SessionStateModule or a custom session state module is included in the \\ section in the application configuration. I access it in Page_Load or PreRender events (I tried both versions). With VS Dev Server it works without a problem. I tried both InProc an SessionState storage, 1 and multiple woker processes. I added a enableSessionState = "true" to my webpage explicitly. Here is part of web.config: <system.web> <globalization culture="ru-RU" uiCulture="ru-RU" /> <compilation debug="true" defaultLanguage="c#"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Web.Extensions.Design, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A" /> <add assembly="System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> </assemblies> </compilation> <pages enableEventValidation="false" enableSessionState="true"> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx" /> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false" /> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="SearchUrlRewriter" type="Synonymizer.SearchUrlRewriter, Synonymizer, Version=1.0.0.0, Culture=neutral" /> <add name="Session" type="System.Web.SessionStateModule" /> </httpModules> <sessionState cookieless="UseCookies" cookieName="My_SessionId" mode="InProc" stateNetworkTimeout="5" /> <customErrors mode="Off" /> </system.web> What else do I need to do to make it work?? UPD I tried to monitor if IIS accesses aspnet_client folder with ProcMon and didn't get any access.


  • Setting value for autocomplete search field linked to Google Places API

    - by user1653350
    I have a web page where people will be able to enter multiple destinations. When they state they want to enter a new destination, the current field values are stored in arrays. If they choose to go back to a previous destination, the relevant values are reinserted into the form fields. I am using the search field linked to autocomplete as the visible display of the destination. When I attempt to put a value into the linked search field, the value is presented as if it is a placeholder instead of a value. Enter the field and the value is removed by the onFocus() event of the Google Places autocomplete add-in. How can I reinsert the value and have it recognised as a value instead of placeholder. field definition in the form <label for="GoogleDestSrch" class="inputText">Destination: <span id="DestinationDisplay2">1</span> <span class="required"><font size="5"> * </font></span></label> <input id="GoogleDestSrch" type="text" size="50" placeholder="Please enter your destination" /> initialise code for Google Places API listener var input = document.getElementById('GoogleDestSrch'); var autocomplete = new google.maps.places.Autocomplete(input); google.maps.event.addListener(autocomplete, 'place_changed', function() { fillInAddress(); }); attempting to reinsert value into search field when prior destination reloaded form.GoogleDestSrch.value = GoogleDestSrch[index]; Issue With Google Places <script language="JavaScript" type="text/javascript"> function GotoDestination(index) { var domove = true; if (index == 0) { index = lastIndex + 1; } else { if (index == -1) { index = lastIndex - 1; if (index == 0) { index = 1; domove = false; } } } if (domove) { if (index != lastIndex) { var doc = window.document; var pdbutton = doc.getElementById("pdbutton"); var pdbutton1 = doc.getElementById("pdbutton1"); if ((index > lastIndex)) { // move to next destination saveDataF(lastIndex); loadDataF(index); lastIndex = index; } else if (index <= lastIndex) { // move to previous destination saveDataF(lastIndex); loadDataF(index); lastIndex = index; } } } } var input; var autocomplete; // fill in the Google metadata when a destination is selected function fillInAddress() { var strFullValue = ''; var strFullGeoValue = ''; var place = autocomplete.getPlace(); document.getElementById("GoogleType").value = place.types[0]; } function saveDataF(index) { var fieldValue; var blankSearch = "Please enter"; // placeholder text for Google Places fieldValue = document.getElementById("GoogleDestSrch").value; if (fieldValue.indexOf(blankSearch) > -1) { fieldValue = ""; } GoogleDestSrch[index] = fieldValue; } function loadDataF(index) { if ((GoogleDestSrch[index] + "") == "undefined") { document.getElementById("GoogleDestSrch").value = ""; } else { document.getElementById("GoogleDestSrch").value = GoogleDestSrch[index]; } } // -- Destination: 1 * Type of place // input = document.getElementById('GoogleDestSrch'); autocomplete = new google.maps.places.Autocomplete(input); google.maps.event.addListener(autocomplete, 'place_changed', function () { fillInAddress(); }); //]]


  • RegEx to ignore / skip everything in html tags

    - by Scott Sumpter
    Looking for a way to combine two Regular Expressions. One to catch the urls and the other to ensure is skips text within html tags. See sample text below functions. Need to pass a block of news text and format text by wrapping urls and email addresses in html tags so users don't have to. The below code works great until there are already html tags within the text. In that case it doubles the html tags. There are plenty of examples to strip html, but I want to just ignore it since the url is already linkified. Also - if there is an easier was to accomplish this, with or without Regex, please let me know. none of my attempts to combine Regexs have worked. coding in ASP.NET VB but will take any workable example/direction. Thanks! ===== Functions ============= Public Shared Function InsertHyperlinks(ByVal inText As String) As String Dim strBuf As String Dim objMatches As Object Dim iStart, iEnd As Integer strBuf = "" iStart = 1 iEnd = 1 Dim strRegUrlEmail As String = "\b(www|http|\S+@)\S+\b" 'RegEx to find urls and email addresses Dim objRegExp As New Regex(strRegUrlEmail, RegexOptions.IgnoreCase) 'Match URLs and emails Dim MatchList As MatchCollection = objRegExp.Matches(inText) If MatchList.Count <> 0 Then objMatches = objRegExp.Matches(inText) For Each Match In MatchList iEnd = Match.Index strBuf = strBuf & Mid(inText, iStart, iEnd - iStart + 1) If InStr(1, Match.Value, "@") Then strBuf = strBuf & HrefGet(Match.Value, "EMAIL", "_BLANK") Else strBuf = strBuf & HrefGet(Match.Value, "WEB", "_BLANK") End If iStart = iEnd + Match.Length + 1 Next strBuf = strBuf & Mid(inText, iStart) InsertHyperlinks = strBuf Else 'No hyperlinks to replace InsertHyperlinks = inText End If End Function Shared Function HrefGet(ByVal url As String, ByVal urlType As String, ByVal Target As String) As String Dim strBuf As String strBuf = "<a href=""" If UCase(urlType) = "WEB" Then If LCase(Left(url, 3)) = "www" Then strBuf = "<a href=""http://" & url & """ Target=""" & _ Target & """>" & url & "</a>" Else strBuf = "<a href=""" & url & """ Target=""" & _ Target & """>" & url & "</a>" End If ElseIf UCase(urlType) = "EMAIL" Then strBuf = "<a href=""mailto:" & url & """ Target=""" & _ Target & """>" & url & "</a>" End If HrefGet = strBuf End Function ===== Sample Text ============= This would be the inText parameter. Midway through the ride, we see a Skip this too. But sometimes we go here [insert normal www dot link dot com]. If you'd like to join us contact Bill Smith at [email protected]. Thanks! sorry stack overflow won't allow multiple hyperlinks to be added. ===== End Sample Text =============
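
    One way to avoid the double-wrapping, sketched below in C# since the question says any workable example or direction is welcome: put existing anchor elements and bare URLs/e-mail addresses into a single alternation, and have a MatchEvaluator hand existing anchors back untouched while wrapping everything else. This is only a sketch; the URL and e-mail patterns are illustrative simplifications, and it assumes already-linkified URLs always sit inside an <a>...</a> pair.

        using System;
        using System.Text.RegularExpressions;

        static class Linkifier
        {
            // Group 1: an existing <a>...</a> element (returned unchanged).
            // Group 2: a bare URL or e-mail address (gets wrapped).
            static readonly Regex Pattern = new Regex(
                @"(<a\b[^>]*>.*?</a>)|((?:https?://|www\.)\S+|\b[\w.+-]+@[\w.-]+\.\w+\b)",
                RegexOptions.IgnoreCase | RegexOptions.Singleline);

            public static string InsertHyperlinks(string text)
            {
                return Pattern.Replace(text, m =>
                {
                    if (m.Groups[1].Success)
                        return m.Value;                      // already a link: skip it

                    string token = m.Groups[2].Value;
                    if (token.Contains("@"))                 // e-mail address
                        return "<a href=\"mailto:" + token + "\" target=\"_blank\">" + token + "</a>";

                    string href = token.StartsWith("www", StringComparison.OrdinalIgnoreCase)
                        ? "http://" + token                  // same www -> http:// fix-up as HrefGet
                        : token;
                    return "<a href=\"" + href + "\" target=\"_blank\">" + token + "</a>";
                });
            }
        }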


  • "Attach or Add an entity that is not new...loaded from another DataContext. This is not supported."

    - by sah302
    Similar error as other questions, but not quite the same, I am not trying to attach anything. What I am trying to do is insert a new row into a linking table, specifically UserAccomplishment. Relations are set in LINQ to User and Accomplishment Tables. I have a generic insert function: Public Function insertRow(ByVal entity As ImplementationType) As Boolean If entity IsNot Nothing Then Dim lcfdatacontext As New LCFDataContext() Try lcfdatacontext.GetTable(Of ImplementationType)().InsertOnSubmit(entity) lcfdatacontext.SubmitChanges() lcfdatacontext.Dispose() Return True Catch ex As Exception Return False End Try Else Return False End If End Function If you try and give UserAccomplishment the two appropriate objects this will naturally crap out if either the User or Accomplishment already exist. It only works when both user and accomplishment don't exist. I expected this behavior. What does work is simply giving the userAccomplishment object a user.id and accomplishment.id and populating the rest of the fields. This works but is kind of awkward to use in my app, it would be much easier to simply pass in both objects and have it work out what already exists and what doesn't. Okay so I made the following (please ignore the fact that this is horribly inefficient because I know it is): Public Class UserAccomplishmentDao Inherits EntityDao(Of UserAccomplishment) Public Function insertLinkerObjectRow(ByVal userAccomplishment As UserAccomplishment) Dim insertSuccess As Boolean = False If Not userAccomplishment Is Nothing Then Dim userDao As New UserDao() Dim accomplishmentDao As New AccomplishmentDao() Dim user As New User() Dim accomplishment As New Accomplishment() 'see if either object already exists in db' user = userDao.getOneByValueOfProperty("Id", userAccomplishment.User.Id) accomplishment = accomplishmentDao.getOneByValueOfProperty("Id", userAccomplishment.Accomplishment.Id) If user Is Nothing And accomplishment Is Nothing Then 'neither the user or the accomplishment exist, both are new so insert them both, typical insert' insertSuccess = Me.insertRow(userAccomplishment) ElseIf user Is Nothing And Not accomplishment Is Nothing Then 'user is new, accomplishment is not new, so just insert the user, and the relation in userAccomplishment' Dim userWithExistingAccomplishment As New UserAccomplishment(userAccomplishment.User, userAccomplishment.Accomplishment.Id, userAccomplishment.LastUpdatedBy) insertSuccess = Me.insertRow(userWithExistingAccomplishment) ElseIf Not user Is Nothing And accomplishment Is Nothing Then 'user is not new, accomplishment is new, so just insert the accomplishment, and the relation in userAccomplishment' Dim existingUserWithAccomplishment As New UserAccomplishment(userAccomplishment.UserId, userAccomplishment.Accomplishment, userAccomplishment.LastUpdatedBy) insertSuccess = Me.insertRow(existingUserWithAccomplishment) Else 'both are not new, just add the relation' Dim userAccomplishmentBothExist As New UserAccomplishment(userAccomplishment.User.Id, userAccomplishment.Accomplishment.Id, userAccomplishment.LastUpdatedBy) insertSuccess = Me.insertRow(userAccomplishmentBothExist) End If End If Return insertSuccess End Function End Class Alright, here I basically check if the supplied user and accomplishment already exists in the db, and if so call an appropriate constructor that will leave whatever already exists empty, but supply the rest of the information so the insert can succeed. 
However, upon trying an insert: Dim result As Boolean = Me.userAccomplishmentDao.insertLinkerObjectRow(userAccomplishment) in which the user already exists but the accomplishment does not (the 99% typical scenario), I get the error: "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported." I have debugged this multiple times now and am not sure why it is occurring; if either the User or the Accomplishment already exists, I am not including it in the final object I try to insert, so nothing pre-existing should be getting attached. Even in the debugger, at the point of the insert, the existing object's slot was empty: the accomplishment is new and the user is empty. 1) Why is it still saying that, and how can I fix it using my current structure? 2) Pre-emptive answer to the 'use the repository pattern' suggestions: I know this approach is poor in general and I should be using the repository pattern, but I can't use it in the current project because I don't have time to refactor, given my nonexistent knowledge of it and the time constraints. The usage of the app is going to be so small that the inefficient use of DataContexts and so on won't matter much. I can refactor it once it's up and running, but for now I just need to push through with my current structure. Edit: I also tested the case where both already exist and only each object's ID is inserted into the table, and that works. So I could manually insert whichever object doesn't exist as a single insert and then put only the IDs into the linking table, but I still don't know why, when one object exists and I leave it empty, it doesn't work.
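
    Given the poster's own observation that inserting just the two IDs works, one way to sidestep the attach problem entirely is to never hand entities loaded through other DAOs (and therefore through other DataContext instances) to the inserting context, and to build the linking row from foreign-key values alone. Assigning an association property (link.User = someUserLoadedElsewhere) is what typically raises the "not new ... loaded from another DataContext" error, even when the entity looks empty in the debugger. A rough C# sketch of the idea; the column/property names are taken from the question, but the designer-generated names may differ.

        public bool InsertUserAccomplishment(int userId, int accomplishmentId, string updatedBy)
        {
            using (var db = new LCFDataContext())
            {
                // Build the link row from key values only, so no entity materialized
                // by another DataContext is ever attached to this one.
                var link = new UserAccomplishment
                {
                    UserId = userId,
                    AccomplishmentId = accomplishmentId,
                    LastUpdatedBy = updatedBy
                };

                db.GetTable<UserAccomplishment>().InsertOnSubmit(link);
                db.SubmitChanges();
                return true;
            }
        }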


  • iPhone game reading a plist file and looping through a multi-dimensional array

    - by Fulvio
    I have a question regarding an iPhone game I'm developing. At the moment, below is the code I'm using to currently I loop through my multidimensional array and position bricks accordingly on my scene. Instead of having multiple two dimensional arrays within my code as per the following (gameLevel1). Ideally, I'd like to read from a .plist file within my project and loop through the values in that instead. Please take into account that I'd like to have more than one level within my game (possibly 20) so my .plist file would have to have some sort of separator line item to determine what level I want to render. I was then thinking of having some sort of method that I call and that method would take the level number I'm interested in rendering. e.g. Method? +(void)renderLevel:(NSString levelNumber); e.g. .plist file? #LEVEL_ONE# 0,0,0,0,0,0,0,0,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,1,1,1,1,1,1,1,0 0,0,0,0,0,0,0,0,0 #LEVEL_TWO# 1,0,0,0,0,0,0,0,1 1,1,1,1,1,1,1,1,1 1,1,1,1,1,1,1,1,1 1,1,1,1,1,1,1,1,1 1,1,1,1,1,1,1,1,1 0,1,1,1,1,1,1,1,0 1,1,1,1,1,1,1,1,1 0,1,1,1,1,1,1,1,0 1,1,1,1,1,1,1,1,1 0,1,1,1,1,1,1,1,0 1,1,1,1,1,1,1,1,1 0,1,1,1,1,1,1,1,0 1,1,1,1,1,1,1,1,1 1,1,1,1,1,1,1,1,1 1,1,1,1,1,1,1,1,1 1,1,1,1,1,1,1,1,1 1,0,0,0,0,0,0,0,1 Code that I'm currently using: int gameLevel[17][9] = { { 0,0,0,0,0,0,0,0,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,1,1,1,1,1,1,1,0 }, { 0,0,0,0,0,0,0,0,0 } }; for (int row=0; row < 17; row++) { for (int col=0; col < 9; col++) { thisBrickValue = gameLevel[row][col]; xOffset = 35 * floor(col); yOffset = 22 * floor(row); switch (thisBrickValue) { case 0: brick = [[CCSprite spriteWithFile:@"block0.png"] autorelease]; break; case 1: brick = [[CCSprite spriteWithFile:@"block1.png"] autorelease]; break; } brick.position = ccp(xOffset, yOffset); [self addChild:brick]; } }


  • Multi-threaded .NET application blocks during file I/O when protected by Themida

    - by Erik Jensen
    As the title says I have a .NET application that is the GUI which uses multiple threads to perform separate file I/O and notice that the threads occasionally block when the application is protected by Themida. One thread is devoted to reading from serial COM port and another thread is devoted to copying files. What I experience is occasionally when the file copy thread encounters a network delay, it will block the other thread that is reading from the serial port. In addition to slow network (which can be transient), I can cause the problem to happen more frequently by making a PathFileExists call to a bad path e.g. PathFileExists("\\\\BadPath\\file.txt"); The COM port reading function will block during the call to ReadFile. This only happens when the application is protected by Themida. I have tried under WinXP, Win7, and Server 2012. In a streamlined test project, if I replace the .NET application with a MFC unmanaged application and still utilize the same threads I see no issue even when protected with Themida. I have contacted Oreans support and here is their response: The way that a .NET application is protected is very different from a native application. To protect a .NET application, we need to hook most of the file access APIs in order to "cheat" the .NET Framework that the application is protected. I guess that those special hooks (on CreateFile, ReadFile...) are delaying a bit the execution in your application and the problem appears. We did a test making those hooks as light as possible (with minimum code on them) but the problem still appeared in your application. The rest of software protectors that we tried (like Enigma, Molebox...) also use a similar hooking approach as it's the only way to make the .NET packed file to work. If those hooks are not present, the .NET Framework will abort execution as it will see that the original file was tampered (due to all Microsoft checks on .NET files) Those hooks are not present in a native application, that's why it should be working fine on your native application. Oreans support tried other software protectors such as Enigma Protector, Engima VirtualBox, and Molebox and all exhibit the exact same problem. What I have found as a work around is to separate out the file copy logic (where the file exists call is being made) to be performed in a completely separate process. I have experimented with converting the thread functions from unmanaged C++ to VB.NET equivalents (PathFileExists - System.IO.File.Exists and CreateFile/ReadFile - System.IO.Ports.SerialPort.Open/Read) and still see the same serial port read blocked when the file check or copy call is delayed. I have also tried setting the ReadFile to work asynchronously but that had no effect. I believe I am dealing with some low-level windows layer that no matter the language it exhibits a block on a shared resource -- and only when the application is executing under a single .NET process protected by Themida which evidently installs some hooks to allow .NET execution. At this time converting the entire application away from .NET is not an option. Nor is separating out the file copy logic to a separate task. I am wondering if anyone else has more knowledge of how a file operation can block another thread reading from a system port. I have included here example applications that show the problem: https://db.tt/cNMYfEIg - VB.NET https://db.tt/Y2lnTqw7 - MFC They are Visual Studio 2010 solutions. 
When running the Themida-protected exe, you can see that when the FileThread counter pauses (executing the File.Exists call), the ReadThread counter also pauses. When running the unprotected Visual Studio output exe, the ReadThread counter does not pause, which is how we expect it to function. Thanks!
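
    For reference, a very rough C# sketch of the separate-process workaround described above. FileWorker.exe is a hypothetical helper that does the PathFileExists / copy work in its own, unhooked process and reports a result; the only point is that the slow file call never runs through the protected process's hooked file APIs, so it cannot stall the serial-port thread there.

        using System.Diagnostics;

        static class FileWorkerClient
        {
            // "FileWorker.exe" is a hypothetical helper that checks the path and
            // writes "1" or "0" to stdout; any IPC mechanism would do equally well.
            public static bool RemoteFileExists(string path)
            {
                var psi = new ProcessStartInfo("FileWorker.exe", "\"" + path + "\"")
                {
                    UseShellExecute = false,
                    RedirectStandardOutput = true,
                    CreateNoWindow = true
                };

                using (var worker = Process.Start(psi))
                {
                    string output = worker.StandardOutput.ReadToEnd().Trim();
                    worker.WaitForExit();
                    return output == "1";
                }
            }
        }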


  • Need a .NET method to compute a Google PageRank request checksum

    - by Steve K
    The company I work for is currently developing a SEO tool which needs to include a domain or url Pagerank. It is possible to retrieve such data directly from Google by sending a request to the url called by the Google ToolBar. On of the parameters send to that url is a checksum of the domain whose pagerank is being requested. I have found multiple .Net methods for calculating that check sum; however, every one randomly returns corrupt values every so often. I can only handle errors to a certain point before my final data set becomes useless. I know that there are countless tools out there, from browser plugins to desktop applications, that can process page rank, so it can't be impossible. My question, then, is two fold: 1) Any anyone heard of the problem I am having? (specifically in .Net) If so, how can it (or has it) be resolved? 2) Is there a better source for retrieving Pagerank data? Below is the Url and checksum code I have been using. "http://toolbarqueries.google.com/search?client=navclient-auto&ie=UTF-8&oe=UTF-8&features=Rank:&q=info:" & strUrl & "ch=" & strCheckSum where: strUrl = the url being queried strCheckSum = CheckHash(GetHash(url)) (see code below) Any help would be greatly appreciated. ''' <summary> ''' Returns a hash-string from the site's URL ''' </summary> ''' <param name="_SiteURL">full URL as indexed by Google</param> ''' <returns>HASH for site as a string</returns> Private Shared Function GetHash(ByVal _SiteURL As String) As String Try Dim _Check1 As Long = StrToNum(_SiteURL, 5381, 33) Dim _Check2 As Long = StrToNum(_SiteURL, 0, 65599) _Check1 >>= 2 _Check1 = ((_Check1 >> 4) And 67108800) Or (_Check1 And 63) _Check1 = ((_Check1 >> 4) And 4193280) Or (_Check1 And 1023) _Check1 = ((_Check1 >> 4) And 245760) Or (_Check1 And 16383) Dim T1 As Long = ((((_Check1 And 960) << 4) Or (_Check1 And 60)) << 2) Or (_Check2 And 3855) Dim T2 As Long = ((((_Check1 And 4294950912) << 4) Or (_Check1 And 15360)) << 10) Or (_Check2 And 252641280) Return Convert.ToString(T1 Or T2) Catch Return "0" End Try End Function ''' <summary> ''' Checks the HASH-string returned and adds check numbers as necessary ''' </summary> ''' <param name="_HashNum">generated HASH-string</param> ''' <returns>modified HASH-string</returns> Private Shared Function CheckHash(ByVal _HashNum As String) As String Try Dim _CheckByte As Long = 0 Dim _Flag As Long = 0 Dim _tempI As Long = Convert.ToInt64(_HashNum) If _tempI < 0 Then _tempI = _tempI * (-1) End If Dim _Hash As String = _tempI.ToString() Dim _Length As Integer = _Hash.Length For x As Integer = _Length - 1 To 0 Step -1 Dim _quick As Char = _Hash(x) Dim _Re As Long = Convert.ToInt64(_quick.ToString()) If 1 = (_Flag Mod 2) Then _Re += _Re _Re = CLng(((_Re \ 10) + (_Re Mod 10))) End If _CheckByte += _Re _Flag += 1 Next _CheckByte = _CheckByte Mod 10 If 0 <> _CheckByte Then _CheckByte = 10 - _CheckByte If 1 = (_Flag Mod 2) Then If 1 = (_CheckByte Mod 2) Then _CheckByte >>= 1 End If End If End If If _Hash.Length = 9 Then _CheckByte += 5 End If Return "7" + _CheckByte.ToString() + _Hash Catch Return "0" End Try End Function ''' <summary> ''' Converts the string (site URL) into numbers for the HASH ''' </summary> ''' <param name="_str">Site URL as passed by GetHash()</param> ''' <param name="_Chk">Necessary passed value</param> ''' <param name="_Magic">Necessary passed value</param> ''' <returns>Long Integer manipulation of string passed</returns> Private Shared Function StrToNum(ByVal _str As String, ByVal _Chk As Long, ByVal _Magic As Long) As Long Try Dim 
_Int64Unit As Long = Convert.ToInt64(Math.Pow(2, 32)) Dim _StrLen As Integer = _str.Length For x As Integer = 0 To _StrLen - 1 _Chk *= _Magic If _Chk >= _Int64Unit Then _Chk = (_Chk - (_Int64Unit * Convert.ToInt64(_Chk \ _Int64Unit))) _Chk = IIf((_Chk < -2147483648), (_Chk + _Int64Unit), _Chk) End If _Chk += CLng(Asc(_str(x))) Next Catch End Try Return _Chk End Function
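
    One detail worth checking in the request URL quoted above: the checksum is concatenated straight onto the page URL ("...&q=info:" & strUrl & "ch=" & strCheckSum), so the query actually sent ends in something like info:example.comch=7123..., with no separator before ch. A missing "&" there would plausibly produce intermittently wrong results depending on the URL. A small C# illustration of the request-building step only, using the same parameter names as the question; whether the toolbar endpoint still honours such requests is a separate matter.

        static class PageRankRequest
        {
            public static string BuildToolbarQuery(string url, string checksum)
            {
                // Note the explicit "&" before ch=.
                return "http://toolbarqueries.google.com/search?client=navclient-auto" +
                       "&ie=UTF-8&oe=UTF-8&features=Rank:&q=info:" + url +
                       "&ch=" + checksum;
            }
        }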


  • MIME "Content-Type" folding and parameter question regarding RFCs?

    - by BastiBense
    Hello, I'm trying to implement a basic MIME parser for the multipart/related in C++/Qt. So far I've been writing some basic parser code for headers, and I'm reading the RFCs to get an idea how to do everything as close to the specification as possible. Unfortunately there is a part in the RFC that confuses me a bit: From RFC882 Section 3.1.1: Each header field can be viewed as a single, logical line of ASCII characters, comprising a field-name and a field-body. For convenience, the field-body portion of this conceptual entity can be split into a multiple-line representation; this is called "folding". The general rule is that wherever there may be linear-white-space (NOT simply LWSP-chars), a CRLF immediately followed by AT LEAST one LWSP-char may instead be inserted. Thus, the single line Alright, so I simply parse a header field and if a CRLF follows with linear whitespace, I simply concat those in a useful manner to result in a single header line. Let's proceed... From RFC2045 Section 5.1: In the Augmented BNF notation of RFC 822, a Content-Type header field value is defined as follows: content := "Content-Type" ":" type "/" subtype *(";" parameter) ; Matching of media type and subtype ; is ALWAYS case-insensitive. [...] parameter := attribute "=" value attribute := token ; Matching of attributes ; is ALWAYS case-insensitive. value := token / quoted-string token := 1*<any (US-ASCII) CHAR except SPACE, CTLs, or tspecials> Okay. So it seems if you want to specify a Content-Type header with parameters, simple do it like this: Content-Type: multipart/related; foo=bar; something=else ... and a folded version of the same header would look like this: Content-Type: multipart/related; foo=bar; something=else Correct? Good. As I kept reading the RFCs, I came across the following in RFC2387 Section 5.1 (Examples): Content-Type: Multipart/Related; boundary=example-1 start="<[email protected]>"; type="Application/X-FixedRecord" start-info="-o ps" --example-1 Content-Type: Application/X-FixedRecord Content-ID: <[email protected]> [data] --example-1 Content-Type: Application/octet-stream Content-Description: The fixed length records Content-Transfer-Encoding: base64 Content-ID: <[email protected]> [data] --example-1-- Hmm, this is odd. Do you see the Content-Type header? It has a number of parameters, but not all have a ";" as parameter delimiter. Maybe I just didn't read the RFCs correctly, but if my parser works strictly like the specification defines, the type and start-info parameters would result in a single string or worse, a parser error. Guys, what's your thought on this? Just a typo in the RFCs? Or did I miss something? Thanks!
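
    The RFC 2387 example quoted above does look inconsistent with the RFC 2045 grammar (there is no ";" between some of the parameters), and treating that as a typo in the example is the safer reading; a strictly conforming parser may reject it. A tolerant parser can instead accept either ";" or bare folding whitespace between parameters after unfolding. A small sketch of that tolerant split, written in C# purely for illustration; the same tokenizing approach carries over to C++/Qt.

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        static class ContentTypeParser
        {
            // Accepts parameters separated by ';' and/or whitespace, with token or
            // quoted-string values, so the mis-punctuated RFC 2387 example still parses.
            static readonly Regex Param = new Regex(
                @"(?<k>[A-Za-z0-9!#$%&'*+.^_`|~-]+)\s*=\s*(?:""(?<v>[^""]*)""|(?<v>[^;\s]+))");

            public static Dictionary<string, string> ParseParameters(string unfoldedHeaderBody)
            {
                // unfoldedHeaderBody: everything after "type/subtype", already unfolded.
                var parameters = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
                foreach (Match m in Param.Matches(unfoldedHeaderBody))
                    parameters[m.Groups["k"].Value] = m.Groups["v"].Value;
                return parameters;
            }
        }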


  • C# Asynchronous Network IO and OutOfMemoryException

    - by The.Anti.9
    I'm working on a client/server application in C#, and I need to get Asynchronous sockets working so I can handle multiple connections at once. Technically it works the way it is now, but I get an OutOfMemoryException after about 3 minutes of running. MSDN says to use a WaitHandler to do WaitOne() after the socket.BeginAccept(), but it doesn't actually let me do that. When I try to do that in the code it says WaitHandler is an abstract class or interface, and I can't instantiate it. I thought maybe Id try a static reference, but it doesnt have teh WaitOne() method, just WaitAll() and WaitAny(). The main problem is that in the docs it doesn't give a full code snippet, so you can't actually see what their "wait handler" is coming from. its just a variable called allDone, which also has a Reset() method in the snippet, which a waithandler doesn't have. After digging around in their docs, I found some related thing about an AutoResetEvent in the Threading namespace. It has a WaitOne() and a Reset() method. So I tried that around the while(true) { ... socket.BeginAccept( ... ); ... }. Unfortunately this makes it only take one connection at a time. So I'm not really sure where to go. Here's my code: class ServerRunner { private Byte[] data = new Byte[2048]; private int size = 2048; private Socket server; static AutoResetEvent allDone = new AutoResetEvent(false); public ServerRunner() { server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); IPEndPoint iep = new IPEndPoint(IPAddress.Any, 33333); server.Bind(iep); Console.WriteLine("Server initialized.."); } public void Run() { server.Listen(100); Console.WriteLine("Listening..."); while (true) { //allDone.Reset(); server.BeginAccept(new AsyncCallback(AcceptCon), server); //allDone.WaitOne(); } } void AcceptCon(IAsyncResult iar) { Socket oldserver = (Socket)iar.AsyncState; Socket client = oldserver.EndAccept(iar); Console.WriteLine(client.RemoteEndPoint.ToString() + " connected"); byte[] message = Encoding.ASCII.GetBytes("Welcome"); client.BeginSend(message, 0, message.Length, SocketFlags.None, new AsyncCallback(SendData), client); } void SendData(IAsyncResult iar) { Socket client = (Socket)iar.AsyncState; int sent = client.EndSend(iar); client.BeginReceive(data, 0, size, SocketFlags.None, new AsyncCallback(ReceiveData), client); } void ReceiveData(IAsyncResult iar) { Socket client = (Socket)iar.AsyncState; int recv = client.EndReceive(iar); if (recv == 0) { client.Close(); server.BeginAccept(new AsyncCallback(AcceptCon), server); return; } string receivedData = Encoding.ASCII.GetString(data, 0, recv); //process received data here byte[] message2 = Encoding.ASCII.GetBytes("reply"); client.BeginSend(message2, 0, message2.Length, SocketFlags.None, new AsyncCallback(SendData), client); } }
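
    For reference, the MSDN asynchronous-server sample that this snippet is based on uses a ManualResetEvent (a concrete WaitHandle subclass) named allDone, and, crucially, calls Set() inside the accept callback. Without any wait, the while (true) loop posts BeginAccept calls as fast as it can spin, which is a plausible source of the OutOfMemoryException; with a wait but no Set() in the callback, the loop stalls after one accept, matching the single-connection behaviour described. A trimmed sketch of just the accept loop, assuming the rest of the class stays as in the question:

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Threading;

        class ServerRunner
        {
            // ManualResetEvent is a concrete WaitHandle; WaitHandle itself is abstract.
            static readonly ManualResetEvent allDone = new ManualResetEvent(false);
            private readonly Socket server;

            public ServerRunner()
            {
                server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                server.Bind(new IPEndPoint(IPAddress.Any, 33333));
            }

            public void Run()
            {
                server.Listen(100);
                while (true)
                {
                    allDone.Reset();
                    server.BeginAccept(AcceptCon, server);
                    allDone.WaitOne();      // block until AcceptCon has actually started
                }
            }

            void AcceptCon(IAsyncResult iar)
            {
                allDone.Set();              // release the loop so it can post the next accept
                var listener = (Socket)iar.AsyncState;
                Socket client = listener.EndAccept(iar);
                Console.WriteLine(client.RemoteEndPoint + " connected");
                // ... BeginSend / BeginReceive on client exactly as in the code above ...
            }
        }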


  • How to design a C / C++ library to be usable in many client languages?

    - by Brian Schimmel
    I'm planning to code a library that should be usable by a large number of people in on a wide spectrum of platforms. What do I have to consider to design it right? To make this questions more specific, there are four "subquestions" at the end. Choice of language Considering all the known requirements and details, I concluded that a library written in C or C++ was the way to go. I think the primary usage of my library will be in programs written in C, C++ and Java SE, but I can also think of reasons to use it from Java ME, PHP, .NET, Objective C, Python, Ruby, bash scrips, etc... Maybe I cannot target all of them, but if it's possible, I'll do it. Requirements It would be to much to describe the full purpose of my library here, but there are some aspects that might be important to this question: The library itself will start out small, but definitely will grow to enormous complexity, so it is not an option to maintain several versions in parallel. Most of the complexity will be hidden inside the library, though The library will construct an object graph that is used heavily inside. Some clients of the library will only be interested in specific attributes of specific objects, while other clients must traverse the object graph in some way Clients may change the objects, and the library must be notified thereof The library may change the objects, and the client must be notified thereof, if it already has a handle to that object The library must be multi-threaded, because it will maintain network connections to several other hosts While some requests to the library may be handled synchronously, many of them will take too long and must be processed in the background, and notify the client on success (or failure) Of course, answers are welcome no matter if they address my specific requirements, or if they answer the question in a general way that matters to a wider audience! My assumptions, so far So here are some of my assumptions and conclusions, which I gathered in the past months: Internally I can use whatever I want, e.g. C++ with operator overloading, multiple inheritance, template meta programming... as long as there is a portable compiler which handles it (think of gcc / g++) But my interface has to be a clean C interface that does not involve name mangling Also, I think my interface should only consist of functions, with basic/primitive data types (and maybe pointers) passed as parameters and return values If I use pointers, I think I should only use them to pass them back to the library, not to operate directly on the referenced memory For usage in a C++ application, I might also offer an object oriented interface (Which is also prone to name mangling, so the App must either use the same compiler, or include the library in source form) Is this also true for usage in C# ? For usage in Java SE / Java EE, the Java native interface (JNI) applies. I have some basic knowledge about it, but I should definitely double check it. Not all client languages handle multithreading well, so there should be a single thread talking to the client For usage on Java ME, there is no such thing as JNI, but I might go with Nested VM For usage in Bash scripts, there must be an executable with a command line interface For the other client languages, I have no idea For most client languages, it would be nice to have kind of an adapter interface written in that language. 
I think there are tools to automatically generate this for Java and some others. For object-oriented languages, it might be possible to create an object-oriented adapter which hides the fact that the interface to the library is function-based - but I don't know if it's worth the effort. Possible subquestions: Is this possible with manageable effort, or is it just too much portability? Are there any good books / websites about this kind of design criteria? Are any of my assumptions wrong? Which open source libraries are worth studying to learn from their design / interface / source? Meta: this question is rather long; do you see any way to split it into several smaller ones? (If you reply to this, do it as a comment, not as an answer.)
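
    On the specific "is this also true for C#?" point above: a flat, unmangled C interface is exactly the kind of surface .NET's P/Invoke layer consumes, so the same extern "C" API covers C# without recompiling anything against the client's compiler; an object-oriented wrapper is then usually written on the managed side. A minimal sketch of the consuming side, where mylib and its exports are hypothetical placeholders for the kind of opaque-handle C API discussed above:

        using System;
        using System.Runtime.InteropServices;

        static class MyLib
        {
            // Hypothetical flat C exports: create/use/destroy an opaque handle.
            [DllImport("mylib", CallingConvention = CallingConvention.Cdecl)]
            public static extern IntPtr mylib_create();

            [DllImport("mylib", CallingConvention = CallingConvention.Cdecl)]
            public static extern int mylib_get_item_count(IntPtr handle);

            [DllImport("mylib", CallingConvention = CallingConvention.Cdecl)]
            public static extern void mylib_destroy(IntPtr handle);
        }

        class Demo
        {
            static void Main()
            {
                IntPtr handle = MyLib.mylib_create();
                try
                {
                    Console.WriteLine(MyLib.mylib_get_item_count(handle));
                }
                finally
                {
                    // Ownership stays inside the library; the client only holds the handle.
                    MyLib.mylib_destroy(handle);
                }
            }
        }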


  • Injecting an EntityManager via XML and not annotations

    - by user355543
    What I am trying to do is inject through XML almost the same way that is done through A @PersistenceContext annotation. I am in need of this because of the fact I have different entity managers I need to inject into the same DAO. The databases mirror one another and I would rather have 1 base class and for instances of that base class then create multiple classes just so I can use the @PersistenceContext annotation. Here is my example. This is what I am doing now and it works. public class ItemDaoImpl { protected EntityManager entityManager; public List<Item> getItems() { Query query = entityManager.createQuery("select i from Item i"); List<Item> s = (List<Item>)query.getResultList(); return s; } public void setEntityManger(EntityManager entityManager) { this.entityManager = entityManager; } } @Repository(value = "itemDaoStore2") public class ItemDaoImplStore2 extends ItemDaoImpl { @PersistenceContext(unitName = "persistence_unit_2") public void setEntityManger(EntityManager entityManager) { this.entityManager = entityManager; } } @Repository(value = "itemDaoStore1") public class ItemDaoImplStore1 extends ItemDaoImpl { @PersistenceContext(unitName = "persistence_unit_1") public void setEntityManger(EntityManager entityManager) { this.entityManager = entityManager; } } TransactionManagers, EntityManagers are defined below... <!-- Registers Spring's standard post-processors for annotation-based configuration like @Repository --> <context:annotation-config /> <!-- For @Transactional annotations --> <tx:annotation-driven transaction-manager="transactionManager1" /> <tx:annotation-driven transaction-manager="transactionManager2" /> <!-- This makes Spring perform @PersistenceContext/@PersitenceUnit injection: --> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/> <!-- Drives transactions using local JPA APIs --> <bean id="transactionManager1" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory1" /> </bean> <bean id="transactionManager2" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory2" /> </bean> <bean id="entityManagerFactory1" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="persistence_unit_1"/> ... </bean> <bean id="entityManagerFactory2" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="persistence_unit_2"/> ... </bean> What I want to do is to NOT create classes ItemDaoImplStore2 or ItemDaoImplStore1. I want to have these as instances of ItemDaoImpl via xml instead. I do not know how to inject the entitymanager properly though. I want to simulate annotating this as a 'Repository' annotation, and I also want to be able to specify what entityManager to inject by the persistence unit name. I want something similar to the below using XML instead. <!-- Somehow annotate this instance as a @Repository annotation --> <bean id="itemDaoStore1" class="ItemDaoImpl"> <!-- Does not work since it is a LocalContainerEntityManagerFactoryBean--> <!-- Also I would perfer to do it the same way PersistenceContext works and only provide the persistence unit name. 
I would like to be able to specify persistence_unit_1--> <property name="entityManager" ref="entityManagerFactory1"/> </bean> <!-- Somehow annotate this instance as a @Repository annotation --> <bean id="itemDaoStore2" class="ItemDaoImpl"> <!-- Does not work since it is a LocalContainerEntityManagerFactoryBean--> <!-- Also I would perfer to do it the same way PersistenceContext works and only provide the persistence unit name. I would like to be able to specify persistence_unit_2--> <property name="entityManager" ref="entityManagerFactory2"/> </bean>


  • How can I get SQL Server transactions to use record-level locks?

    - by Joe White
    We have an application that was originally written as a desktop app, lo these many years ago. It starts a transaction whenever you open an edit screen, and commits if you click OK, or rolls back if you click Cancel. This worked okay for a desktop app, but now we're trying to move to ADO.NET and SQL Server, and the long-running transactions are problematic. I found that we'll have a problem when multiple users are all trying to edit (different subsets of) the same table at the same time. In our old database, each user's transaction would acquire record-level locks to every record they modified during their transaction; since different users were editing different records, everyone gets their own locks and everything works. But in SQL Server, as soon as one user edits a record inside a transaction, SQL Server appears to get a lock on the entire table. When a second user tries to edit a different record in the same table, the second user's app simply locks up, because the SqlConnection blocks until the first user either commits or rolls back. I'm aware that long-running transactions are bad, and I know that the best solution would be to change these screens so that they no longer keep transactions open for a long time. But since that would mean some invasive and risky changes, I also want to research whether there's a way to get this code up and running as-is, just so I know what my options are. How can I get two different users' transactions in SQL Server to lock individual records instead of the entire table? Here's a quick-and-dirty console app that illustrates the issue. I've created a database called "test1", with one table called "Values" that just has ID (int) and Value (nvarchar) columns. If you run the app, it asks for an ID to modify, starts a transaction, modifies that record, and then leaves the transaction open until you press ENTER. I want to be able to start the program and tell it to update ID 1; let it get its transaction and modify the record; start a second copy of the program and tell it to update ID 2; have it able to update (and commit) while the first app's transaction is still open. Currently it freezes at step 4, until I go back to the first copy of the app and close it or press ENTER so it commits. The call to command.ExecuteNonQuery blocks until the first connection is closed. public static void Main() { Console.Write("ID to update: "); var id = int.Parse(Console.ReadLine()); Console.WriteLine("Starting transaction"); using (var scope = new TransactionScope()) using (var connection = new SqlConnection(@"Data Source=localhost\sqlexpress;Initial Catalog=test1;Integrated Security=True")) { connection.Open(); var command = connection.CreateCommand(); command.CommandText = "UPDATE [Values] SET Value = 'Value' WHERE ID = " + id; Console.WriteLine("Updating record"); command.ExecuteNonQuery(); Console.Write("Press ENTER to end transaction: "); Console.ReadLine(); scope.Complete(); } } Here are some things I've already tried, with no change in behavior: Changing the transaction isolation level to "read uncommitted" Specifying a "WITH (ROWLOCK)" on the UPDATE statement
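
    One detail worth checking in the repro: a parameterless new TransactionScope() defaults to IsolationLevel.Serializable, and a serializable UPDATE that has to scan [Values] (for instance when ID is not a primary key or is unindexed) will take and hold locks well beyond the single row, which from the second session looks just like a table lock. A sketch of the same console test requesting ReadCommitted explicitly; connectionString and id stand in for the literal values used in the question, and indexing ID matters as much as the isolation level, since a seek touches one row while a scan touches them all.

        using System;
        using System.Data.SqlClient;
        using System.Transactions;

        static void UpdateValueRow(string connectionString, int id)
        {
            var options = new TransactionOptions
            {
                // The parameterless TransactionScope constructor defaults to Serializable.
                IsolationLevel = IsolationLevel.ReadCommitted,
                Timeout = TransactionManager.DefaultTimeout
            };

            using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var command = connection.CreateCommand())
                {
                    // Parameterized, but otherwise the same statement as in the repro.
                    command.CommandText = "UPDATE [Values] SET Value = @value WHERE ID = @id";
                    command.Parameters.AddWithValue("@value", "Value");
                    command.Parameters.AddWithValue("@id", id);
                    command.ExecuteNonQuery();
                }

                Console.Write("Press ENTER to end transaction: ");
                Console.ReadLine();
                scope.Complete();
            }
        }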


  • XDocument + IEnumerable is causing out of memory exception in System.Xml.Linq.dll

    - by Manatherin
    Basically I have a program which, when it starts loads a list of files (as FileInfo) and for each file in the list it loads a XML document (as XDocument). The program then reads data out of it into a container class (storing as IEnumerables), at which point the XDocument goes out of scope. The program then exports the data from the container class to a database. After the export the container class goes out of scope, however, the garbage collector isn't clearing up the container class which, because its storing as IEnumerable, seems to lead to the XDocument staying in memory (Not sure if this is the reason but the task manager is showing the memory from the XDocument isn't being freed). As the program is looping through multiple files eventually the program is throwing a out of memory exception. To mitigate this ive ended up using System.GC.Collect(); to force the garbage collector to run after the container goes out of scope. this is working but my questions are: Is this the right thing to do? (Forcing the garbage collector to run seems a bit odd) Is there a better way to make sure the XDocument memory is being disposed? Could there be a different reason, other than the IEnumerable, that the document memory isnt being freed? Thanks. Edit: Code Samples: Container Class: public IEnumerable<CustomClassOne> CustomClassOne { get; set; } public IEnumerable<CustomClassTwo> CustomClassTwo { get; set; } public IEnumerable<CustomClassThree> CustomClassThree { get; set; } ... public IEnumerable<CustomClassNine> CustomClassNine { get; set; }</code></pre> Custom Class: public long VariableOne { get; set; } public int VariableTwo { get; set; } public DateTime VariableThree { get; set; } ... Anyway that's the basic structures really. The Custom Classes are populated through the container class from the XML document. The filled structures themselves use very little memory. A container class is filled from one XML document, goes out of scope, the next document is then loaded e.g. public static void ExportAll(IEnumerable<FileInfo> files) { foreach (FileInfo file in files) { ExportFile(file); //Temporary to clear memory System.GC.Collect(); } } private static void ExportFile(FileInfo file) { ContainerClass containerClass = Reader.ReadXMLDocument(file); ExportContainerClass(containerClass); //Export simply dumps the data from the container class into a database //Container Class (and any passed container classes) goes out of scope at end of export } public static ContainerClass ReadXMLDocument(FileInfo fileToRead) { XDocument document = GetXDocument(fileToRead); var containerClass = new ContainerClass(); //ForEach customClass in containerClass //Read all data for customClass from XDocument return containerClass; } Forgot to mention this bit (not sure if its relevent), the files can be compressed as .gz so I have the GetXDocument() method to load it private static XDocument GetXDocument(FileInfo fileToRead) { XDocument document; using (FileStream fileStream = new FileStream(fileToRead.FullName, FileMode.Open, FileAccess.Read, FileShare.Read)) { if (String.Compare(fileToRead.Extension, ".gz", true) == 0) { using (GZipStream zipStream = new GZipStream(fileStream, CompressionMode.Decompress)) { document = XDocument.Load(zipStream); } } else { document = XDocument.Load(fileStream); } return document; } } Hope this is enough information. Thanks Edit: The System.GC.Collect() is not working 100% of the time, sometimes the program seems to retain the XDocument, anyone have any idea why this might be?
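
    A likely reason the document survives the container going out of scope: if the container's IEnumerable<T> properties are assigned deferred LINQ-to-XML queries, every one of those enumerables keeps a reference back into the XDocument's element tree, so the whole tree stays reachable for as long as the container does. Materializing the projections with ToList() inside ReadXMLDocument usually removes the need for GC.Collect(). A sketch of that change; the element names used here are assumptions for illustration, not taken from the real XML.

        using System;
        using System.IO;
        using System.Linq;
        using System.Xml.Linq;

        public static ContainerClass ReadXMLDocument(FileInfo fileToRead)
        {
            XDocument document = GetXDocument(fileToRead);

            var container = new ContainerClass
            {
                // ToList() copies the projected data out of the XDocument, so no live
                // query holds on to the element tree after this method returns.
                CustomClassOne = document.Descendants("CustomClassOne")   // element name assumed
                    .Select(x => new CustomClassOne
                    {
                        VariableOne = (long)x.Element("VariableOne"),
                        VariableTwo = (int)x.Element("VariableTwo"),
                        VariableThree = (DateTime)x.Element("VariableThree")
                    })
                    .ToList()
                // ... repeat the same Select(...).ToList() for the other collections ...
            };

            return container;
        }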


  • Problems uploading different file types in CodeIgniter

    - by Drew
    Below is my script that i'm using to upload different files. All the solutions I've found deal only with multiple image uploads. I am totally stumped for a solution on this. Can someone tell me what it is i'm supposed to be doing to upload different files in the same form? Thanks function do_upload() { $config['upload_path'] = './uploads/nav'; $config['allowed_types'] = 'gif|jpg|png'; $config['max_size'] = '2000'; $this->load->library('upload', $config); if ( ! $this->upload->do_upload('userfile')) { $error = array('error' => $this->upload->display_errors()); return $error; } else { $soundfig['upload_path'] = './uploads/nav'; $soundfig['allowed_types'] = 'mp3|wav'; $this->load->library('upload', $soundfig); if ( ! $this->upload->do_upload('soundfile')) { $error = array('error' => $this->upload->display_errors()); return $error; } else { $data = $this->upload->data('userfile'); $sound = $this->upload->data('soundfile'); $full_path = 'uploads/nav/' . $data['file_name']; $sound_path = 'uploads/nav/' . $sound['file_name']; if($this->input->post('active') == '1'){ $active = '1'; }else{ $active = '0'; } $spam = array( 'image_url' => $full_path, 'sound' => $sound_path, 'active' => $active, 'url' => $this->input->post('url') ); $id = $this->input->post('id'); $this->db->where('id', $id); $this->db->update('NavItemData', $spam); return true; } } } Here is my form: <?php echo form_open_multipart('upload/do_upload');?> <?php if(isset($buttons)) : foreach($buttons as $row) : ?> <h2><?php echo $row->name; ?></h2> <input type="file" name="userfile" size="20" /><br /> <input type="file" name="soundfile" size="20" /> <input type="hidden" name="oldfile" value="<?php echo $row->image_url; ?>" /> <input type="hidden" name="id" value="<?php echo $row->id; ?>" /> <br /><br /> <label>Url: </label><input type="text" name="url" value="<?php echo $row->url; ?>" /><br /> <input type="checkbox" name="active" value="1" <?php if($row->active == '1') { echo 'checked'; } ?> /><br /><br /> <input type="submit" value="submit" /> </form> <?php endforeach; ?> <?php endif; ?>


  • PHP: table structure

    - by A3efan
    I'm developing a website that has some audio courses, each course can have multiple lessons. I want to display each course in its own table with its different lessons. This is my sql statement: Table: courses id, title Table: lessons id, cid (course id), title, date, file $sql = "SELECT lessons.*, courses.title AS course FROM lessons INNER JOIN courses ON courses.id = lessons.cid GROUP BY lessons.id ORDER BY lessons.id" ; Can someone help me with the PHP code? This is the I code I have written: mysql_select_db($database_config, $config); mysql_query("set names utf8"); $sql = "SELECT lessons.*, courses.title AS course FROM lessons INNER JOIN courses ON courses.id = lessons.cid GROUP BY lessons.id ORDER BY lessons.id" ; $result = mysql_query($sql) or die(mysql_error()); while ($row = mysql_fetch_assoc($result)) { echo "<p><span class='heading1'>" . $row['course'] . "</span> </p> "; echo "<p class='datum'>Posted onder <a href='*'>*</a>, latest update on " . strftime("%A %d %B %Y %H:%M", strtotime($row['date'])); } echo "</p>"; echo "<class id='text'>"; echo "<p>...</p>"; echo "<table border: none cellpadding='1' cellspacing='1'>"; echo "<tr>"; echo "<th>Nr.</th>"; echo "<th width='450'>Lesso</th>"; echo "<th>Date</th>"; echo "<th>Download</th>"; echo "</tr>"; echo "<tr>"; echo "<td>" . $row['nr'] . "</td>"; echo "<td>" . $row['title'] . "</td>"; echo "<td>" . strftime("%d/%m/%Y", strtotime($row['date'])) . "</td>"; echo "<td><a href='audio/" . rawurlencode($row['file']) . "'>MP3</a></td>"; echo "</tr>"; echo "</table>"; echo "<br>"; } ?>


  • Cannot get focus on a newly opened tab with Selenium IDE

    - by Goueg83460
    I'm trying to create some web test with selenium IDE. But I have one problem when I click on a javascript link it opened a new tab. I need perform some check on this new tab but I can't get he focus that is still in main page. I tried several things that I'ad search on google without succeed to do it works. I hope that someone can help me. Thanks in advance. Update: So I tried several things and I tink I'm on a good way. I can get windows names with : StoreAllWindowNames names echo names=${name} I have something like: , 987dfg4545sdfgsd It seems that value before "," is the NULL so the default page and the other value is the name of my page. But I'm not able to open it with a selectWindow. Does someone know how should I do it ?? Thanks in advance. More info about my selenium tests: <tr> <td>setSpeed</td> <td>1000</td> <td></td> </tr> <tr> <td>selectWindow</td> <td>null</td> <td></td> </tr> <tr> <td>click</td> <td>link=Show Tree...</td> <td></td> </tr> <tr> <td>storeAllWindowNames</td> <td>names</td> <td>array</td> </tr> <tr> <td>echo</td> <td>${names}</td> <td></td> </tr> <tr> <td>waitForPopUp</td> <td>${names}</td> <td>30000</td> </tr> <tr> <td>selectWindow</td> <td>name=${names}</td> <td></td> </tr> <tr> <td>clickAndWait</td> <td>link=Search</td> <td></td> </tr> Results: * [info] Executing: |setSpeed | 1000 | | * [info] Executing: |selectWindow | null | | * [info] Executing: |click | link=Show Tree... | | * [info] Executing: |storeAllWindowNames | names | array | * [info] Executing: |echo | ${names} | | * [info] echo: ,bdae1e119a367a54 * [info] Executing: |waitForPopUp | ${names} | 30000 | * [error] Timed out after 30000ms * [info] Executing: |selectWindow | name=${names} | | * [error] Window does not exist. If this looks like a Selenium bug, make sure to read http://seleniumhq.org/docs/04_selenese_commands.html#alerts-popups-and-multiple-windows for potential workarounds. Where bdae1e119a367a54 is the dynamic value that I want to get. I found a mach that someone done but it does not works for me it return null http://old.nabble.com/How-can-I-access-the-second,-third..-element-of-a-stored-array--td9393201.html


  • Can you simplify and generalize this useful jQuery function?

    - by user199368
    Hi, I'm doing an eshop with goods displayed as "tiles" in grid as usual. I just want to use various sizes of tiles and make sure (via jQuery) there are no free spaces. In basic situation, I have a 960px wrapper and want to use 240x180px (class .grid_4) tiles and 480x360px (class .grid_8) tiles. See image (imagine no margins/paddings there): Problems without jQuery: - when the CMS provides the big tile as 6th, there would be a free space under the 5th one - when the CMS provides the big tile as 7th, there would be a free space under 5th and 6th - when the CMS provides the big tile as 8th, it would shift to next line, leaving position no.8 free My solution so far looks like this: $(".grid_8").each(function(){ //console.log("BIG on position "+($(this).index()+1)+" which is "+(($(this).index()+1)%2?"ODD":"EVEN")); switch (($(this).index()+1)%4) { case 1: // nothing needed //console.log("case 1"); break; case 2: //need to shift one position and wrap into 240px div //console.log("case 2"); $(this).insertAfter($(this).next()); //swaps this with next $(this).prevAll(":nth(0), :nth(1)").wrapAll("<div class=\"grid_4\" />"); break; case 3: //need to shift two positions and wrap into 480px div //console.log("case 3"); $(this).prevAll(":nth(0), :nth(1)").wrapAll("<div class=\"grid_4\" />"); //wraps previous two - forcing them into column $(this).nextAll(":nth(0), :nth(1)").wrapAll("<div class=\"grid_4\" />"); //wraps next two - forcing them into column $(this).insertAfter($(this).next()); //moves behind the second column break; case 0: //need to shift one position //console.log("case 4"); $(this).insertAfter($(this).next()); //console.log("shifted to next line"); break; } }); It should be obvious from the comments how it works - generally always makes sure that the big tile is on odd position (count of preceding small tiles is even) by shifting one position back if needed. Also small tiles to the left from the big one need to be wrapped in another div so that they appear in column rather than row. Now finally the questions: how to generalize the function so that I can use even more tile dimensions like 720x360 (3x2), 480x540 (2x3), etc.? is there a way to simplify the function? I need to make sure that big tile counts as a multiple of small tiles when checking the actual position. Because using index() on the tile on position 12 (last tile in 3rd row) would now return 7 (position 8) because tiles on positions 5 and 9 are wrapped together in one culumn and the big tile is also just a single div, but spans 2x2 positions. any clean way to ensure this? Thank you very much for any hints. Feel free to reuse the code, I think it can be useful. Josef

    Read the article

  • Is there a way to delay compilation of a stored procedure's execution plan?

    - by Ian Henry
    (At first glance this may look like a duplicate of http://stackoverflow.com/questions/421275 or http://stackoverflow.com/questions/414336, but my actual question is a bit different) Alright, this one's had me stumped for a few hours. My example here is ridiculously abstracted, so I doubt it will be possible to recreate locally, but it provides context for my question (Also, I'm running SQL Server 2005). I have a stored procedure with basically two steps, constructing a temp table, populating it with very few rows, and then querying a very large table joining against that temp table. It has multiple parameters, but the most relevant is a datetime "@MinDate." Essentially: create table #smallTable (ID int) insert into #smallTable select (a very small number of rows from some other table) select * from aGiantTable inner join #smallTable on #smallTable.ID = aGiantTable.ID inner join anotherTable on anotherTable.GiantID = aGiantTable.ID where aGiantTable.SomeDateField > @MinDate If I just execute this as a normal query, by declaring @MinDate as a local variable and running that, it produces an optimal execution plan that executes very quickly (first joins on #smallTable and then only considers a very small subset of rows from aGiantTable while doing other operations). It seems to realize that #smallTable is tiny, so it would be efficient to start with it. This is good. However, if I make that a stored procedure with @MinDate as a parameter, it produces a completely inefficient execution plan. (I am recompiling it each time, so it's not a bad cached plan...at least, I sure hope it's not) But here's where it gets weird. If I change the proc to the following: declare @LocalMinDate datetime set @LocalMinDate = @MinDate --where @MinDate is still a parameter create table #smallTable (ID int) insert into #smallTable select (a very small number of rows from some other table) select * from aGiantTable inner join #smallTable on #smallTable.ID = aGiantTable.ID inner join anotherTable on anotherTable.GiantID = aGiantTable.ID where aGiantTable.SomeDateField > @LocalMinDate Then it gives me the efficient plan! So my theory is this: when executing as a plain query (not as a stored procedure), it waits to construct the execution plan for the expensive query until the last minute, so the query optimizer knows that #smallTable is small and uses that information to give the efficient plan. But when executing as a stored procedure, it creates the entire execution plan at once, thus it can't use this bit of information to optimize the plan. But why does using the locally declared variables change this? Why does that delay the creation of the execution plan? Is that actually what's happening? If so, is there a way to force delayed compilation (if that indeed is what's going on here) even when not using local variables in this way? More generally, does anyone have sources on when the execution plan is created for each step of a stored procedure? Googling hasn't provided any helpful information, but I don't think I'm looking for the right thing. Or is my theory just completely unfounded? Edit: Since posting, I've learned of parameter sniffing, and I assume this is what's causing the execution plan to compile prematurely (unless stored procedures indeed compile all at once), so my question remains -- can you force the delay? Or disable the sniffing entirely?
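
    For what it's worth, the two workarounds usually suggested for this kind of parameter sniffing (sketched here against the question's pseudo-schema, not tested) are a statement-level recompile, which lets the optimizer see both the real @MinDate value and the already-populated temp table, or an OPTIMIZE FOR hint:

    select *
    from aGiantTable
        inner join #smallTable on #smallTable.ID = aGiantTable.ID
        inner join anotherTable on anotherTable.GiantID = aGiantTable.ID
    where aGiantTable.SomeDateField > @MinDate
    option (recompile);    -- available on SQL Server 2005

    -- On SQL Server 2008 and later you could instead disable sniffing for this parameter:
    -- option (optimize for (@MinDate unknown));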
The question is academic, since I can force a more efficient plan by replacing the select * from aGiantTable with select * from (select * from aGiantTable where ID in (select ID from #smallTable)) as aGiantTable Or just sucking it up and masking the parameters, but still, this inconsistency has me pretty curious.

    Read the article

  • What is the fastest way to Initialize a multi-dimensional array to non-default values in .NET?

    - by AMissico
    How do I initialize a multi-dimensional array of a primitive type as fast as possible? I am stuck with using multi-dimensional arrays. My problem is performance. The following routine initializes a 100x100 array in approx. 500 ticks. Removing the int.MaxValue initialization results in approx. 180 ticks just for the looping. Approximately 100 ticks to create the array without looping and without initializing to int.MaxValue. Routines similar to this are called a few hundred-thousand to several million times during a "run". The array size will not change during a run and arrays are created one-at-a-time, used, then discarded, and a new array created. A "run" may last from one minute (using 10x10 arrays) to forty-five minutes (100x100). The application creates arrays of int, bool, and struct. There can be multiple "runs" executing at the same time, but there are not, because performance degrades terribly. I am using 100x100 as a base-line. I am open to suggestions on how to optimize this non-default initialization of an array. One idea I had is to use a smaller primitive type when available. For instance, using byte instead of int saves 100 ticks. I would be happy with this, but I am hoping that I don't have to change the primitive data type. public int[,] CreateArray(Size size) { int[,] array = new int[size.Width, size.Height]; for (int x = 0; x < size.Width; x++) { for (int y = 0; y < size.Height; y++) { array[x, y] = int.MaxValue; } } return array; } Down to 450 ticks with the following: public int[,] CreateArray1(Size size) { int iX = size.Width; int iY = size.Height; int[,] array = new int[iX, iY]; for (int x = 0; x < iX; x++) { for (int y = 0; y < iY; y++) { array[x, y] = int.MaxValue; } } return array; } Down to approximately 165 ticks after a one-time initialization of 2800 ticks. (See my answer below.) If I can get stackalloc to work with multi-dimensional arrays, I should be able to get the same performance without having to initialize the private static array. private static bool _arrayInitialized5; private static int[,] _array5; public static int[,] CreateArray5(Size size) { if (!_arrayInitialized5) { int iX = size.Width; int iY = size.Height; _array5 = new int[iX, iY]; for (int x = 0; x < iX; x++) { for (int y = 0; y < iY; y++) { _array5[x, y] = int.MaxValue; } } _arrayInitialized5 = true; } return (int[,])_array5.Clone(); } Down to approximately 165 ticks without using the "clone technique" above. (See my answer below.) I am sure I can get the ticks lower, if I can just figure out the return of CreateArray9. public unsafe static int[,] CreateArray8(Size size) { int iX = size.Width; int iY = size.Height; int[,] array = new int[iX, iY]; fixed (int* pfixed = array) { int count = array.Length; for (int* p = pfixed; count-- > 0; p++) *p = int.MaxValue; } return array; }
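
    Another variation on the same caching idea (a sketch assuming the same class and usings as the code above; not benchmarked) is to keep one pre-filled template per size and Buffer.BlockCopy it into each new array, which copies raw bytes instead of assigning element by element:

    private static int[,] _template;   // hypothetical cached template, one size per run

    public static int[,] CreateArrayByBlockCopy(Size size)
    {
        if (_template == null || _template.GetLength(0) != size.Width || _template.GetLength(1) != size.Height)
        {
            _template = new int[size.Width, size.Height];
            for (int x = 0; x < size.Width; x++)
                for (int y = 0; y < size.Height; y++)
                    _template[x, y] = int.MaxValue;
        }
        int[,] result = new int[size.Width, size.Height];
        // Buffer.BlockCopy works on rectangular arrays of primitives; the last argument is a byte count.
        Buffer.BlockCopy(_template, 0, result, 0, _template.Length * sizeof(int));
        return result;
    }

    Like CreateArray5, this is not thread-safe as written; guard the template initialization if several "runs" ever execute at once.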

    Read the article

  • Symfony2: validate an object that is not an entity

    - by Marronsuisse
    I am using CraueFormFlowBundle to have a multiple-page form, and am trying to do some validation on some of the fields but can't figure out how to do this. The object that needs to be validated isn't an Entity, which is causing me trouble. I tried adding a collectionConstraint in the getDefaultOptions function of my form type class, but this doesn't work as I get the "Expected argument of type array or Traversable and ArrayAccess" error. I tried with annotations in my object class, but they don't seem to be taken into account. Are annotations taken into account if the class isn't an entity? (I set enable_annotations to true) Anyway, what is the proper way to do this? Basically, I just want to validate that "age" is an integer... class PoemDataCollectorFormType extends AbstractType { public function buildForm(FormBuilder $builder, array $options) { switch ($options['flowStep']) { case 6: $builder->add('msgCategory', 'hidden', array( )); $builder->add('msgFIB','text', array( 'required' => false, )); $builder->add('age', 'integer', array( 'required' => false, )); break; } } public function getDefaultOptions(array $options) { $options = parent::getDefaultOptions($options); $options['flowStep'] = 1; $options['data_class'] = 'YOP\YourOwnPoetBundle\PoemBuilder\PoemDataCollector'; $options['intention'] = 'my_secret_key'; return $options; } } EDIT: add code, handle validation with annotations Like Cyprian, I was pretty sure that using annotations should work; however, it doesn't... Here is how I try: In my Controller: public function collectPoemDataAction() { $collector = $this->get('yop.poem.datacollector'); $flow = $this->get('yop.form.flow.poemDataCollector'); $flow->bind($collector); $form = $flow->createForm($collector); if ($flow->isValid($form)) { .... } } In my PoemDataCollector class, which is my data class (service yop.poem.datacollector): class PoemDataCollector { /** * @Assert\Type(type="integer", message="Age should be a number") */ private $age; } EDIT2: Here is the services implementation: The data class (PoemDataCollector) seems to be linked to the flow class and not to the form. Is that why there is no validation? <service id="yop.poem.datacollector" class="YOP\YourOwnPoetBundle\PoemBuilder\PoemDataCollector"> </service> <service id="yop.form.poemDataCollector" class="YOP\YourOwnPoetBundle\Form\Type\PoemDataCollectorFormType"> <tag name="form.type" alias="poemDataCollector" /> </service> <service id="yop.form.flow.poemDataCollector" class="YOP\YourOwnPoetBundle\Form\PoemDataCollectorFlow" parent="craue.form.flow" scope="request"> <call method="setFormType"> <argument type="service" id="yop.form.poemDataCollector" /> </call> </service> How can I do the validation while respecting the CraueFormFlowBundle guidelines? The guidelines state: Validation groups To validate the form data class a step-based validation group is passed to the form type. By default, if getName() of the form type returns registerUser, such a group is named flow_registerUser_step1 for the first step. Where should I state my constraint to use those validation groups..? I tried: YOP\YourOwnPoetBundle\PoemBuilder\Form\Type\PoemDataCollectorFormType: properties: name: - MinLength: { limit: 5, message: "Your name must have at least {{ limit }} characters.", groups: [flow_poemDataCollector_step1] } sex: - Type: type: integer message: Please input a number groups: [flow_poemDataCollector_step6] But it is not taken into account.
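
    One thing that stands out (a guess based only on the snippets above): the YAML constraints are declared for the form type class, but the validator looks them up on the data class, i.e. on PoemDataCollector. A sketch of what the YAML would look like keyed on the data class, reusing the step group that CraueFormFlowBundle passes in (file path assumed):

    # Resources/config/validation.yml
    YOP\YourOwnPoetBundle\PoemBuilder\PoemDataCollector:
        properties:
            age:
                - Type:
                    type: integer
                    message: Age should be a number
                    groups: [flow_poemDataCollector_step6]

    The same applies to the @Assert\Type annotation: it is already on the right class, but without a groups option (e.g. groups={"flow_poemDataCollector_step6"}) it only belongs to the Default group, so validating against the step group skips it.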

    Read the article

  • Advice on Factory Method

    - by heath
    Using PHP 5.2, I'm trying to use a factory to return a service to the controller. My request URI would be of the format www.mydomain.com/service/method/param1/param2/etc. My controller would then call a service factory using the token sent in the URI. From what I've seen, there are two main routes I could go with my factory. Single method: class ServiceFactory { public static function getInstance($token) { switch($token) { case 'location': return new StaticPageTemplateService('location'); break; case 'product': return new DynamicPageTemplateService('product'); break; case 'user': return new UserService(); break; default: return new StaticPageTemplateService($token); } } } or multiple methods: class ServiceFactory { public static function getLocationService() { return new StaticPageTemplateService('location'); } public static function getProductService() { return new DynamicPageTemplateService('product'); } public static function getUserService() { return new UserService(); } public static function getDefaultService($token) { return new StaticPageTemplateService($token); } } So, given this, I will have a handful of generic services in which I will pass that token (for example, StaticPageTemplateService and DynamicPageTemplateService) that will probably implement another factory method just like this to grab templates, domain objects, etc. And some that will be specific services (for example, UserService) which will be 1:1 to that token and not reused. So, this seems to be an ok approach (please give suggestions if it is not) for a small number of services. But what about when, over time as my site grows, I end up with 100s of possibilities? This no longer seems like a good approach. Am I just way off to begin with, or is there another design pattern that would be a better fit? Thanks. UPDATE: @JSprang - the token is actually sent in the URI, like mydomain.com/location would want a service specific to location and mydomain.com/news would want a service specific to news. Now, for a lot of these, the service will be generic. For instance, a lot of pages will call a StaticTemplatePageService in which the token is passed in to the service. That service in turn will grab the "location" template or "links" template and just spit it back out. Some will need DynamicTemplatePageService in which the token gets passed in, like "news", and that service will grab a NewsDomainObject, determine how to present it and spit that back out. Others, like "user", will be specific to a UserService in which it will have methods like Login, Logout, etc. So basically, the token will be used to determine which service is needed AND, if it is a generic service, that token will be passed to that service. Maybe token isn't the correct terminology but I hope you get the purpose. I wanted to use the factory so I can easily swap out which Service I need in case my needs change. I just worry that after the site grows larger (both pages and functionality) the factory will become rather bloated. But I'm starting to feel like I just can't get away from storing the mappings in an array (like Stephen's solution). That just doesn't feel OOP to me and I was hoping to find something more elegant.
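
    If the number of tokens really does grow into the hundreds, one common middle ground (a sketch; the class names are the ones from the question, the structure is illustrative) is to keep the switch's knowledge in a configuration map and let a single factory method consume it:

    class ServiceFactory {
        private static $map = array(
            'location' => array('StaticPageTemplateService', 'location'),
            'product'  => array('DynamicPageTemplateService', 'product'),
            'user'     => array('UserService', null),
        );

        public static function getInstance($token) {
            if (isset(self::$map[$token])) {
                list($class, $arg) = self::$map[$token];
                return $arg === null ? new $class() : new $class($arg);
            }
            return new StaticPageTemplateService($token);   // default case
        }
    }

    The map could just as easily be loaded from a config file, which keeps the factory itself from bloating as tokens are added; whether that "feels OOP" enough is exactly the trade-off the question is wrestling with.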

    Read the article

  • Efficient representation of Hierarchies in Hibernate.

    - by Alison G
    I'm having some trouble representing an object hierarchy in Hibernate. I've searched around, and haven't managed to find any examples doing this or similar - you have my apologies if this is a common question. I have two types which I'd like to persist using Hibernate: Groups and Items. * Groups are identified uniquely by a combination of their name and their parent. * The groups are arranged in a number of trees, such that every Group has zero or one parent Group. * Each Item can be a member of zero or more Groups. Ideally, I'd like a bi-directional relationship allowing me to get: * all Groups that an Item is a member of * all Items that are a member of a particular Group or its descendants. I also need to be able to traverse the Group tree from the top in order to display it on the UI. The basic object structure would ideally look like this: class Group { ... /** @return all items in this group and its descendants */ Set<Item> getAllItems() { ... } /** @return all direct children of this group */ Set<Group> getChildren() { ... } ... } class Item { ... /** @return all groups that this Item is a direct member of */ Set<Group> getGroups() { ... } ... } Originally, I had just made a simple bi-directional many-to-many relationship between Items and Groups, such that fetching all items in a group hierarchy required recursion down the tree, and fetching groups for an Item was a simple getter, i.e.: class Group { ... private Set<Item> items; private Set<Group> children; ... /** @return all items in this group and its descendants */ Set<Item> getAllItems() { Set<Item> allItems = new HashSet<Item>(); allItems.addAll(this.items); for(Group child : this.getChildren()) { allItems.addAll(child.getAllItems()); } return allItems; } /** @return all direct children of this group */ Set<Group> getChildren() { return this.children; } ... } class Item { ... private Set<Group> groups; /** @return all groups that this Item is a direct member of */ Set<Group> getGroups() { return this.groups; } ... } However, this resulted in multiple database requests to fetch the Items in a Group with many descendants, or for retrieving the entire Group tree to display in the UI. This seems very inefficient, especially with deeper, larger group trees. Is there a better or standard way of representing this relationship in Hibernate? Am I doing anything obviously wrong or stupid? My only other thought so far was this: Replace the group's id, parent and name fields with a unique "path" String which specifies the whole ancestry of a group, e.g.: /rootGroup /rootGroup/aChild /rootGroup/aChild/aGrandChild The join table between Groups and Items would then contain group_path and item_id. This immediately solves the two issues I was suffering previously: 1. The entire group hierarchy can be fetched from the database in a single query and reconstructed in-memory. 2. To retrieve all Items in a group or its descendants, we can select from group_item where group_path='N' or group_path like 'N/%' However, this seems to defeat the point of using Hibernate. All thoughts welcome!
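
    The "path" idea at the end is essentially the materialized-path pattern, and it does fit the two queries described. As a rough sketch of the subtree query in HQL (assuming Group gains a path property and Item keeps its mapped groups collection; the names are taken from the question, the mapping itself is not shown):

    @SuppressWarnings("unchecked")
    List<Item> itemsInSubtree = (List<Item>) session.createQuery(
            "select distinct i from Item i join i.groups g"
          + " where g.path = :path or g.path like :prefix")
        .setParameter("path", groupPath)
        .setParameter("prefix", groupPath + "/%")
        .list();

    A single query like this replaces the recursive getAllItems() walk, at the cost of having to rewrite descendants' paths whenever a group is moved.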

    Read the article

  • Android: MediaPlayer gapless or seamless Video Playing

    - by John Wang
    I can play the videos fine back to back by implementing the OnCompletionListener to set the data source to a different file. No problems there. I call reset() and prepare() just fine. What I haven't been able to figure out is how to get rid of the 1-2 second gap of screen flicker between the data source change and the new video starting. The gap shows a black screen, and I haven't found any way to get around it. I've tried setting the background of the parent view to an image, but it manages to bypass that, even if the SurfaceView is transparent (which it is by default). I've also tried having multiple video files playing at the same time, and switching the mediaplayer's display when one ends and the other is supposed to start. The last thing I tried was to have a second view in the background that I show temporarily while the video is "preparing" and remove when the video is ready to start. That also wasn't very seamless. Is there any way to get rid of that gap? Running a video in a loop works wonderfully and does exactly what I want, with the exception that it's looping through the same video instead of playing a different one that I pick. main.xml <?xml version="1.0" encoding="utf-8"?> <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:background="@drawable/background" android:layout_height="fill_parent"> <SurfaceView android:id="@+id/surface" android:layout_width="fill_parent" android:layout_height="fill_parent" android:layout_gravity="center"> </SurfaceView> </FrameLayout> Player.java public class Player extends Activity implements OnCompletionListener, MediaPlayer.OnPreparedListener, SurfaceHolder.Callback { private MediaPlayer player; private SurfaceView surface; private SurfaceHolder holder; public void onCreate(Bundle b) { super.onCreate(b); setContentView(R.layout.main); surface = (SurfaceView)findViewById(R.id.surface); holder = surface.getHolder(); holder.addCallback(this); holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); } public void onCompletion(MediaPlayer arg0) { File clip = new File(Environment.getExternalStorageDirectory(),"file2.mp4"); playVideo(clip.getAbsolutePath()); } public void onPrepared(MediaPlayer mediaplayer) { holder.setFixedSize(player.getVideoWidth(), player.getVideoHeight()); player.start(); } private void playVideo(String url) { try { File clip = new File(Environment.getExternalStorageDirectory(),"file1.mp4"); if (player == null) { player = new MediaPlayer(); player.setScreenOnWhilePlaying(true); } else { player.stop(); player.reset(); } player.setDataSource(url); player.setDisplay(holder); player.setOnPreparedListener(this); player.prepare(); player.setOnCompletionListener(this); } catch (Throwable t) { Log.e("ERROR", "Exception Error", t); } }
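
    One approach that tends to shrink the gap (a sketch only; exact behaviour varies by device and OS version, and newer Android releases add MediaPlayer.setNextMediaPlayer for true gapless playback) is to prepare the next clip on a second MediaPlayer while the current one is still playing, then hand the surface over on completion instead of resetting a single player:

    private MediaPlayer current;
    private MediaPlayer next;

    private void prepareNext(String path) throws IOException {
        next = new MediaPlayer();
        next.setDataSource(path);
        next.prepare();                  // blocking; fine for short local clips, use prepareAsync + a listener otherwise
    }

    public void onCompletion(MediaPlayer finished) {
        finished.setDisplay(null);       // detach the surface from the finished player...
        next.setDisplay(holder);         // ...and attach it to the one that is already prepared
        next.start();
        finished.release();
        current = next;
    }

    The black frame mostly comes from prepare() running while the surface has nothing to draw, so doing that work ahead of time removes most of the flicker.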

    Read the article

  • [BEGINNER] Javascript Pop Ups

    - by user566312
    Hey All, My boss has asked for a page (one that will not itself change) to load two timed pop-ups. I have found code and edited it to what I had thought it should do, but it only ever shows the pop-up from the last onLoad call. I am a designer and I have helped with making webpages, but JavaScript is far outside of what I can understand. I have already learned how to use a single pop-up and spent a while learning the timeouts, but I cannot seem to get it to work with multiple popup functions. If you have a moment, would you take a look? Thank you :) <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>H's Page 1</title> <SCRIPT LANGUAGE="JavaScript"> <!-- Original: Ronnie T. Moore, Editor --> <!-- Web Site: The JavaScript Source --> <!-- This script and many more are available free online at --> <!-- The JavaScript Source!! http://javascript.internet.com --> <!-- Begin closetime = 3; // Close window after __ number of seconds? // 0 = do not close, anything else = number of seconds function Start1(URL, WIDTH, HEIGHT) { windowprops = "left=50,top=50,width=" + WIDTH + ",height=" + HEIGHT; preview = window.open(URL, "preview", windowprops); if (closetime) setTimeout("preview.close();", closetime*1000); } function doPopup1() { url = "http://www.google.com"; width = 1680; // width of window in pixels height = 1050; // height of window in pixels delay = 10; // time in seconds before popup opens timer = setTimeout("Start1(url, width, height)", delay*1000); } closetime = 3; // Close window after __ number of seconds? function Start2(URL, WIDTH, HEIGHT) { windowprops = "left=50,top=50,width=" + WIDTH + ",height=" + HEIGHT; preview = window.open(URL, "preview", windowprops); if (closetime) setTimeout("preview.close();", closetime*1000); } function doPopup2() { url = "http://www.yahoo.com"; width = 1680; // width of window in pixels height = 1050; // height of window in pixels delay = 5; // time in seconds before popup opens timer = setTimeout("Start2(url, width, height)", delay*1000); } // End --> </script> <!-- STEP TWO: Insert the onLoad event handler into your BODY tag --> <!-- Script Size: 1.27 KB --> </head> <body OnLoad="doPopup1(); doPopup2();"> <p>My page text.</p> <p>My page text.</p> <p>My page text.</p> <p>My page text.</p> </body> </html>
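
    The reason only the last pop-up "wins" is that the two doPopup/Start pairs share the same global url, width, height, timer and preview variables, so whichever runs last overwrites the values the earlier timer will read when it fires. A sketch of a single parameterised function that avoids the shared globals (URLs and sizes copied from the page above):

    function schedulePopup(url, width, height, delaySeconds, closeSeconds) {
        setTimeout(function () {
            var props = "left=50,top=50,width=" + width + ",height=" + height;
            var win = window.open(url, "_blank", props);   // "_blank" so the two pop-ups get separate windows
            if (closeSeconds) {
                setTimeout(function () { win.close(); }, closeSeconds * 1000);
            }
        }, delaySeconds * 1000);
    }

    // In the BODY tag:
    // <body onload="schedulePopup('http://www.google.com', 1680, 1050, 10, 3); schedulePopup('http://www.yahoo.com', 1680, 1050, 5, 3);">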

    Read the article
