Search Results

Search found 26059 results on 1043 pages for 'spring test'.

  • OSMDroid simple example required

    - by Bex
    Hi! I am trying to create an app that uses offline maps and custom tiles. For this I have decided to use OSMDroid and have included the jar within my project. I will create my custom tiles using MOBAC. I have been directed to these examples: http://code.google.com/p/osmdroid/source/browse/#svn%2Ftrunk%2FOpenStreetMapViewer%2Fsrc%2Forg%2Fosmdroid%2Fsamples but I am struggling to follow them as I am new to both Java and Android. I have created a class called test (following an example!):

        public class test extends Activity {
            /** Called when the activity is first created. */
            protected static final String PROVIDER_NAME = LocationManager.GPS_PROVIDER;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                MapView map = (MapView) findViewById(R.id.map);
                map.setTileSource(TileSourceFactory.MAPQUESTOSM);
                map.setBuiltInZoomControls(true);
                map.setMultiTouchControls(true);
                map.getController().setZoom(16);
                map.getController().setCenter(new GeoPoint(30266000, -97739000));
            }
        }

    with a layout file:

        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="vertical"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent">
            <org.osmdroid.views.MapView
                android:id="@+id/map"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                tilesource="MapquestOSM"
                android:layout_weight="1" />
        </LinearLayout>

    When I run this I see no map, just an empty grid. I think this is due to my tile source but I'm not sure what I need to change it to. Can anyone help? Bex
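
    A common cause of the blank grid is missing manifest permissions rather than the tile source itself: osmdroid has to download and cache tiles, so it needs network and storage access. A minimal sketch of the relevant AndroidManifest.xml entries (the permission names are standard Android ones; whether they are what is missing here is an assumption):

        <!-- Hypothetical manifest excerpt: add inside <manifest>, outside <application> -->
        <uses-permission android:name="android.permission.INTERNET" />
        <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

    If the permissions are already there, watching logcat while the empty grid is shown should narrow down whether tile requests are being made and failing.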

  • How to serve tiff WMS imagery through GeoServer

    - by mikem419
    Ok, so I am new to the GeoServer/database world. I am a student intern and I have been given the task of setting up a WMS using GeoServer. I have never done any database work before this, so bear with me if my questions leave out important information. I am using GeoServer 2.0.1 in standalone mode (downloaded with Jetty) with PostgreSQL 8.4 installed.

    I went through the nyc_roads and nyc_buildings install demo in the GeoServer documentation, but I still do not understand how I should go about serving up some test images. I noticed that the nyc_roads setup included a .sql file that was responsible for setting up the nyc_buildings database; I do not know how or where this file was generated. Our test images are .tiff and .jpeg. I have successfully been able to do a WMS call on the local GeoServer machine and have opened the included demo imagery. I now wish to add these .tiff and .jpeg images to GeoServer and access them through WMS. I have tried copying the images to the GeoServer data directory, adding a new data store, and adding layers, but I always receive an error regarding the "input stream." Again, I am very sorry if I am leaving out a lot of vital information; this is as much as I know. Thanks!

  • Multiple sendto() using UDP socket

    - by ereOn
    Hi, I have a network application which uses UDP to communicate with other instances of the same program. For different reasons, I must use UDP here. I recently had problems sending huge amounts of data over UDP and had to implement a fragmentation system to split my messages into small data chunks. So far it worked well, but I now encounter an issue when I have to send a lot of data chunks. I have the following algorithm:

    1. Split the message into small data chunks (around 1500 bytes).
    2. Iterate over the list of data chunks and send each one using sendto().

    However, when I send a lot of data chunks, the receiver only gets the first 6 messages. Sometimes it misses the sixth and receives the seventh; it depends. In any case, sendto() always indicates success. This always happens when I test my software over a loopback interface (127.0.0.1) but never over my LAN. If I add something like std::cout << "test" << std::endl; between the sendto() calls, then every frame is received.

    I am aware that UDP allows packet loss and that my frames might be lost for a lot of reasons, and I suppose it has to do with the rate at which I am sending the data chunks. What would be the right approach here?

    - Implementing some acknowledgement mechanism (just like TCP) seems overkill.
    - Adding some arbitrary waiting time between the sendto() calls is ugly and will probably decrease performance.
    - Increasing (if possible) the receiver's internal UDP buffer? I don't even know if this is possible.
    - Something else?

    I really need your advice here. Thanks very much.
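
    On the third option: the receive buffer can indeed usually be enlarged with setsockopt(). A minimal sketch for POSIX sockets (the 4 MB figure is an arbitrary assumption, and the kernel may cap it, e.g. via net.core.rmem_max on Linux):

        #include <sys/socket.h>

        // Enlarge the receiver's UDP socket buffer so short bursts of datagrams
        // are less likely to be dropped before the application reads them.
        bool enlarge_receive_buffer(int sock, int bytes = 4 * 1024 * 1024)
        {
            return setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) == 0;
        }

    Pacing the sender (a small token bucket rather than a fixed sleep) is the other common approach when bursts overrun the receiver.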

  • .NET Working with Locking and Threads

    - by aherrick
    I'm working on this small test application to learn threading/locking. I have the following code; I would think that the line should only write to the console once, but it doesn't seem to be working as expected. Any thoughts on why? What I'm trying to do is add this Lot object to a List, and then if any other threads try to hit that list, it would block. Am I completely misusing lock here?

        class Program
        {
            static void Main(string[] args)
            {
                int threadCount = 10;
                // spin up x number of test threads
                Thread[] threads = new Thread[threadCount];
                Work w = new Work();
                for (int i = 0; i < threadCount; i++)
                {
                    threads[i] = new Thread(new ThreadStart(w.DoWork));
                }
                for (int i = 0; i < threadCount; i++)
                {
                    threads[i].Start();
                }
                // don't let the console close
                Console.ReadLine();
            }
        }

        public class Work
        {
            List<Lot> lots = new List<Lot>();
            private static readonly object thisLock = new object();

            public void DoWork()
            {
                Lot lot = new Lot() { LotID = 1, LotNumber = "100" };
                LockLot(lot);
            }

            private void LockLot(Lot lot)
            {
                // i would think that "Lot has been added" should only print once?
                lock (thisLock)
                {
                    if (!lots.Contains(lot))
                    {
                        lots.Add(lot);
                        Console.WriteLine("Lot has been added");
                    }
                }
            }
        }
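
    The lock itself is working; the likely reason "Lot has been added" prints more than once is that List<T>.Contains uses reference equality unless Lot overrides Equals, and every call to DoWork creates a brand-new Lot instance. A hedged sketch of a Lot with value equality (the two property names come from the question, the rest is an assumption):

        public class Lot
        {
            public int LotID { get; set; }
            public string LotNumber { get; set; }

            // With value equality, lots.Contains(lot) treats two Lots holding the
            // same data as the same lot, so the "added" branch runs only once.
            public override bool Equals(object obj)
            {
                Lot other = obj as Lot;
                return other != null && other.LotID == LotID && other.LotNumber == LotNumber;
            }

            public override int GetHashCode()
            {
                return LotID.GetHashCode() ^ (LotNumber ?? string.Empty).GetHashCode();
            }
        }

    Alternatively, keeping Lot unchanged and testing with something like if (!lots.Any(l => l.LotID == lot.LotID)) would show the same effect without touching the class.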

  • Workaround for GNU Make 3.80 eval bug

    - by bengineerd
    I'm trying to create a generic build template for my Makefiles, kind of like they discuss in the eval documentation. I've run into a known bug with GNU Make 3.80: when $(eval) evaluates a line that is over 193 characters, Make crashes with a "virtual memory exhausted" error. The code I have that causes the issue looks like this:

        SRC_DIR = ./src/
        PROG_NAME = test

        define PROGRAM_template
        $(1)_SRC_DIR = $$(SRC_DIR)$(1)/
        $(1)_SRC_FILES = $$(wildcard $$($(1)_SRC_DIR)*.c)
        $(1)_OBJ_FILES = $$($(1)_SRC_FILES:.c=.o)
        $$($(1)_OBJ_FILES) : $$($(1)_SRC_FILES)   # This is the problem line
        endef

        $(eval $(call PROGRAM_template,$(PROG_NAME)))

    When I run this Makefile, I get:

        gmake: *** virtual memory exhausted.  Stop.

    The expected output is that all .c files in ./src/test/ get compiled into .o files (via an implicit rule). The problem is that $$($(1)_SRC_FILES) and $$($(1)_OBJ_FILES) together are over 193 characters long (if there are enough source files). I have tried running the Makefile on a directory where there are only two .c files, and it works fine. It's only when there are many .c files in the SRC directory that I get the error.

    I know that GNU Make 3.81 fixes this bug. Unfortunately I do not have the authority or ability to install the newer version on the system I'm working on; I'm stuck with 3.80. So, is there some workaround? Maybe split $$($(1)_SRC_FILES) up and declare each dependency individually within the eval?
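
    Splitting the dependencies per file is a workable direction: instead of one rule whose prerequisite list grows past the limit, the template can emit one short rule per source file, so each individual $(eval) stays well under 193 characters. A hedged sketch along those lines (untested against 3.80 specifically; SRC_DIR and PROG_NAME are assumed to be defined as above):

        # One short rule per source file, evaluated separately.
        define PER_FILE_template
        $(1:.c=.o) : $(1)
        endef

        define PROGRAM_template
        $(1)_SRC_DIR = $$(SRC_DIR)$(1)/
        $(1)_SRC_FILES = $$(wildcard $$($(1)_SRC_DIR)*.c)
        $(1)_OBJ_FILES = $$($(1)_SRC_FILES:.c=.o)
        $$(foreach src,$$($(1)_SRC_FILES),$$(eval $$(call PER_FILE_template,$$(src))))
        endef

        $(eval $(call PROGRAM_template,$(PROG_NAME)))

    Each inner $(eval) only ever sees something like "./src/test/foo.o : ./src/test/foo.c", so no single evaluated line grows with the number of sources.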

  • Error Ant Build/deploy to WebSphere 7.0

    - by adisembiring
    Hi, I'm trying to build/deploy a WAR to WebSphere Process Server 7.0, running on Windows. I am using http://illegalargumentexception.blogspot.com/2008/08/ant-automated-deployment-to-websphere.html as my reference and http://illegalargumentexception.googlecode.com/svn/trunk/code/java/WebSphereAntFiles/ as the sample code to deploy. This is my build.properties:

        #build properties
        mywebappear=D:/data/code/WebSphereAntFiles/scripts/test/mywebappEAR.ear
        #WAS6 install directory
        was_home=C:/IBM/WID7_WTE/runtimes/bi_v7
        #server name (see cell/node/server; e.g. "server1")
        was_server=server1
        #user + password; for use when security is enabled
        was_user=admin
        was_password=admin
        #stops scripts on problem
        was_failonerror=true
        #virtual host
        was_virtualhost=default_host
        #Absolute path to EAR file
        #was_ear=fooEAR.ear
        #Name of the enterprise application
        #was_appname=fooEAR

    This is the console output while trying to build with ws_ant.bat:

        [wsDefaultBindings] mywebapp.war
        [wsDefaultBindings]   <virtual-host> --> default_host
        [wsDefaultBindings]
        [wsDefaultBindings] ------------------------
        [wsDefaultBindings] Saving EAR File to directory
        [wsDefaultBindings] Saved EAR File to directory Successfully

        test_wsStartServer:
        WAS_wsStartServer:
        depCheck:
        depCheck:
        [startServer] ADMU0116I: Tool information is being logged in file
        [startServer]            C:\IBM\WID7_WTE\runtimes\bi_v7\profiles\qwps\logs\server1\startServer.log
        [startServer] ADMU0128I: Starting tool with the qwps profile
        [startServer] ADMU3100I: Reading configuration for server: server1
        [startServer] ADMU3028I: Conflict detected on port 8880. Likely causes: a) An instance of
        [startServer]            the server server1 is already running b) some other process is
        [startServer]            using port 8880
        [startServer] ADMU3027E: An instance of the server may already be running: server1
        [startServer] ADMU0111E: Program exiting with error:
        [startServer]            com.ibm.websphere.management.exception.AdminException: ADMU3027E: An
        [startServer]            instance of the server may already be running: server1
        [startServer] ADMU1211I: To obtain a full trace of the failure, use the -trace option.
        [startServer] ADMU0211I: Error details may be seen in the file:
        [startServer]            C:/IBM/WID7_WTE/runtimes/bi_v7/profiles/qwps\logs\server1\startServer.log

        BUILD FAILED
        D:\data\code\WebSphereAntFiles\scripts\test\build.xml:68: The following error occurred while executing this line:
        D:\data\code\WebSphereAntFiles\scripts\was\wsStartServer.xml:49: Java returned: -1

  • XML parse node value as string

    - by bharathi
    My XML file looks like the following, and I want to get the text content of each value node:

        <property regex=".*" xpath=".*">
        <value>
        127.0.0.1
        </value>
        <property regex=".*" xpath=".*">
        <value>
        val
        <![CDATA[ <Valve className="org.tomcat.AccessLogValve" exclude="PASSWORD,pwd,pWord,ticket" enabled="true" serviceName="zohocrm" logDir="../logs" fileName="access" format="URI,&quot;PARAM&quot;,&quot;REFERRER&quot;,TIME_TAKEN,BYTES_OUT,STATUS,TIMESTAMP,METHOD,SESSION_ID,REMOTE_IP,&quot;INTERNAL_IP&quot;,&quot;USER_AGENT&quot;,PROTOCOL,SERVER_NAME,SERVER_PORT,BYTES_IN,ZUID,TICKET_DIGEST,THREAD_ID,REQ_ID"/> ]]>
        test
        </value>
        </property>

    I want to get the text in the order it appears in the file. Here is my Java code:

        Document doc = parseDocument("properties.xml");
        NodeList properties = doc.getElementsByTagName("property");
        for (int i = 0, len = properties.getLength(); i < len; i++) {
            Element property = (Element) properties.item(i);
            // How can I proceed further?
        }

    Expected output:

        Node 1 : 127.0.0.1
        Node 2 : val <Valve className="org.tomcat.AccessLogValve" exclude="PASSWORD,pwd,pWord,ticket" enabled="true" serviceName="zohocrm" logDir="../logs" fileName="access" format="URI,&quot;PARAM&quot;,&quot;REFERRER&quot;,TIME_TAKEN,BYTES_OUT,STATUS,TIMESTAMP,METHOD,SESSION_ID,REMOTE_IP,&quot;INTERNAL_IP&quot;,&quot;USER_AGENT&quot;,PROTOCOL,SERVER_NAME,SERVER_PORT,BYTES_IN,ZUID,TICKET_DIGEST,THREAD_ID,REQ_ID"/> test

    Please suggest your views.
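
    For what it's worth, Element.getTextContent() (DOM Level 3) already concatenates ordinary text nodes and CDATA sections of an element in document order, so one way to finish the loop is sketched below. The file and tag names come from the question; the class wrapper and iterating over the value elements directly are my assumptions:

        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class ValueDump {
            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse("properties.xml");

                NodeList values = doc.getElementsByTagName("value");
                for (int i = 0, len = values.getLength(); i < len; i++) {
                    Element value = (Element) values.item(i);
                    // getTextContent() merges the text and CDATA children in order,
                    // so "val", the <Valve .../> markup and "test" come back as one string.
                    System.out.println("Node " + (i + 1) + " : "
                            + value.getTextContent().trim());
                }
            }
        }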

  • Loading a RSA private key from memory using libxmlsec

    - by ereOn
    Hello, I'm currently using libxmlsec in my C++ software and I'm trying to load an RSA private key from memory. To do this, I searched through the API and found this function. It takes binary data, a size, a format, and several PEM-callback related parameters. When I call the function, it just gets stuck, uses 100% of the CPU time and never returns, which is quite annoying because I have no way of finding out what is wrong. Here is my code:

        d_xmlsec_dsig_context->signKey = xmlSecCryptoAppKeyLoadMemory(
            reinterpret_cast<const xmlSecByte*>(data),
            static_cast<xmlSecSize>(datalen),
            xmlSecKeyDataFormatBinary,
            NULL,
            NULL,
            NULL
        );

    data is a const char* pointing to the raw bytes of my RSA key (obtained with i2d_RSAPrivateKey() from OpenSSL) and datalen is the size of data. My test private key doesn't have a passphrase, so I decided not to use the callbacks for the time being. Has someone already done something similar? Do you see anything that I could change or test to make progress on this problem? I only discovered the library yesterday, so I might be missing something obvious here; I just can't see it. Thank you very much for your help.
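
    One thing that may be worth checking: i2d_RSAPrivateKey() produces DER-encoded data, and xmlsec distinguishes DER from its own generic binary format, so the format constant may simply be wrong for this buffer. A hedged sketch of the same call with the DER constant (whether this is what causes the hang is an assumption):

        // Sketch only: identical to the call above, but using the DER format
        // constant, since the buffer came from i2d_RSAPrivateKey().
        d_xmlsec_dsig_context->signKey = xmlSecCryptoAppKeyLoadMemory(
            reinterpret_cast<const xmlSecByte*>(data),
            static_cast<xmlSecSize>(datalen),
            xmlSecKeyDataFormatDer,   // instead of xmlSecKeyDataFormatBinary
            NULL, NULL, NULL);

    Exporting the key as PEM and passing xmlSecKeyDataFormatPem would be the other easy variant to try.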

  • .NET Process.Kill() in a safe way

    - by Orborde
    I'm controlling a creaky old FORTRAN simulator from a VB.NET GUI, using redirected I/O to communicate with the simulator executable. The GUI pops up a "status" window with a progress bar, estimated time, and a "STOP" button (Button_Stop). Now, I want Button_Stop to terminate the simulator process immediately. The obvious way to do this is to call Kill() on the child Process object. This throws an exception if it's done after the process has exited, but I can test whether the process has exited before trying to kill it, right? OK, so I do the following when the button is clicked:

        If Not Child.HasExited Then
            Child.Kill()
            Button_Stop.Enabled = False
        End If

    However, what if the process happens to exit between the test and the call to Kill()? In that case, I get an exception. The next thing to occur to me was that I could do Button_Stop.Enabled = False in the Process.Exited event handler, and thus prevent the Child.Kill() call in the Button_Stop.Clicked handler. But since the Process.Exited handler is called on a different thread, that still leaves the following possible interleaving:

    1. Child process exits.
    2. Process.Exited fires, calls Invoke to schedule the Button_Stop.Enabled = False.
    3. User clicks on Button_Stop, triggering Child.Kill().
    4. Button_Stop.Enabled = False actually happens.

    An exception would then be thrown on step 3. How do I kill the process without any race conditions? Am I thinking about this entirely wrong?
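
    Since the HasExited check and the Kill() call can never be made atomic from the caller's side, the usual approach is to treat "it already exited" as success and catch the exception. A hedged VB.NET sketch (the exception types are the ones Kill() is documented to throw; the surrounding method is an assumption):

        Private Sub StopSimulator()
            Try
                If Not Child.HasExited Then
                    Child.Kill()
                End If
            Catch ex As InvalidOperationException
                ' The process exited between the check and the Kill() call - fine.
            Catch ex As System.ComponentModel.Win32Exception
                ' The process is already terminating - also fine for our purposes.
            End Try
            Button_Stop.Enabled = False
        End Sub

    Disabling the button unconditionally afterwards keeps the UI consistent whichever way the race goes.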

  • SmtpClient.SendAsync - How to ensure my application doesn't finish before callback?

    - by James
    Hi, I need to send emails asynchronously from a console application. I need to do some DB updates in the callback, but my application exits before the callback code gets run! How can I stop this from happening in a nice manner, rather than simply guessing how long to wait before exiting? I would imagine the async calls get placed in some form of thread; is it possible to check if any are still waiting to be called?

    Sample code:

        private static void SendCompletedCallback(object sender, AsyncCompletedEventArgs e)
        {
            // Get the unique identifier for this asynchronous operation.
            String token = (string) e.UserState;

            if (e.Cancelled)
            {
                Console.WriteLine("[{0}] Send canceled.", token);
            }
            if (e.Error != null)
            {
                Console.WriteLine("[{0}] {1}", token, e.Error.ToString());
            }
            else
            {
                // update DB
                Console.WriteLine("Message sent.");
            }
        }

        public static void Main(string[] args)
        {
            var users = Repository.GetUsers();
            SmtpClient client = new SmtpClient("Host");
            client.SendCompleted += new SendCompletedEventHandler(SendCompletedCallback);
            MailAddress from = new MailAddress("[email protected]", "System", Encoding.UTF8);

            foreach (var user in users)
            {
                MailAddress to = new MailAddress(user.Email);
                MailMessage message = new MailMessage(from, to);
                message.Body = "This is a test";
                message.BodyEncoding = System.Text.Encoding.UTF8;
                message.Subject = "test message 1" + someArrows;
                message.SubjectEncoding = System.Text.Encoding.UTF8;

                string userState = String.Format("Message for user id {0}", user.ID);
                client.SendAsync(message, userState);
                message.Dispose();
            }

            // need to wait here until I have received a callback for each message
            // otherwise the application will exit
        }
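
    One way to keep Main alive is to count outstanding sends and block until every callback has signalled. A hedged, self-contained sketch of the pattern using CountdownEvent (.NET 4); the thread-pool call stands in for SendAsync and its completion callback:

        using System;
        using System.Threading;

        class Demo
        {
            static CountdownEvent pending;

            static void Main()
            {
                int messageCount = 3;                      // stand-in for users.Count
                pending = new CountdownEvent(messageCount);

                for (int i = 0; i < messageCount; i++)
                {
                    // Stand-in for client.SendAsync(message, userState); the callback
                    // fires later on another thread, like SendCompletedCallback does.
                    ThreadPool.QueueUserWorkItem(state =>
                    {
                        Console.WriteLine("sent {0}", state);
                        pending.Signal();                  // do this at the end of SendCompletedCallback
                    }, i);
                }

                pending.Wait();                            // Main blocks here instead of exiting early
                Console.WriteLine("All callbacks completed.");
            }
        }

    In the real program, pending would be created with users.Count before the loop, Signal() would be the last line of SendCompletedCallback, and Wait() would replace the comment at the end of Main. Note also that a single SmtpClient only allows one SendAsync to be in flight at a time, so the messages may need to be sent one after another (or each with its own client).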

  • Detecting duplicate values in a column of a DataTable while traversing through it

    - by Ashish Gupta
    I have a DataTable with Id (Guid) and Name (string) columns. I traverse the data table, run a validation criterion on the Name (say, it should contain only letters and numbers), and then add the corresponding Id to a List if the name passes the validation. Something like this:

        List<Guid> validIds = new List<Guid>();
        foreach (DataRow row in DataTable1.Rows)
        {
            if (IsValid(row["Name"]))
            {
                validIds.Add((Guid)row["Id"]);
            }
        }

    In addition to this validation I should also check that the name is not repeated in the whole DataTable (even case-insensitively); if it is repeated, I should not add the corresponding Id to the List. Things I am thinking about / have thought about:

    1. I can have another List, check for the "Name" in it, and if it already exists, skip adding the corresponding Guid.
    2. I cannot use a HashSet as that would treat "Test" and "test" as different strings and not duplicates.
    3. Copy the DataTable to another one where I have only the distinct names (I haven't tried this and the code might be incorrect, please correct me wherever possible):

        DataTable dataTableWithDistinctName = new DataTable();
        dataTableWithDistinctName.CaseSensitive = true;
        CopiedDataTable = DataTable1.DefaultView.ToTable(true, "Name");

    I would then loop through the original DataTable and check for the existence of the "Name" in the CopiedDataTable; if it exists, I won't add the Id to the List.

    Is there a better and more optimal way to achieve the same? I always need to think of performance. Although there are many related questions on SO, I didn't find a problem similar to this one. If you could point me to a similar question, it would be helpful. Thanks.
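
    On point 2: HashSet<string> can be made case-insensitive by passing a comparer to its constructor, which avoids both the second List and the copied DataTable. A hedged sketch built around the loop above (column names as in the question; the IsValid stub and the wrapper class are assumptions), which keeps the first occurrence of a name and skips later repeats:

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        static class NameFilter
        {
            // Stand-in for the question's IsValid(): letters and digits only.
            static bool IsValid(string name)
            {
                return !string.IsNullOrEmpty(name) && name.All(char.IsLetterOrDigit);
            }

            public static List<Guid> GetValidIds(DataTable table)
            {
                var validIds = new List<Guid>();
                // "Test" and "test" count as the same name with this comparer.
                var seenNames = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

                foreach (DataRow row in table.Rows)
                {
                    string name = (string)row["Name"];
                    // Add() returns false if the name was already seen in any casing.
                    if (seenNames.Add(name) && IsValid(name))
                    {
                        validIds.Add((Guid)row["Id"]);
                    }
                }
                return validIds;
            }
        }

    If repeated names should be excluded entirely (including their first occurrence), a first pass that counts names case-insensitively would be needed before the loop above.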

  • How do I manage application configuration in ASP.NET?

    - by GlennS
    I am having difficulty managing the configuration of an ASP.NET application deployed for different clients. The sheer volume of settings which need twiddling takes up a large amount of time, and the current configuration methods are too complicated for us to push this responsibility out to support partners. Any suggestions for better methods to handle this, or good sources of information to research?

    How we do things at present:

    - Various XML configuration files which are referenced in Web.config, for example an AppSettings.xml. Configurations for specific sites are kept in duplicate configuration files.
    - Text files containing lists of data specific to the site.
    - In some cases, manual one-off changes to the database.
    - C# configuration for the Windsor IoC container.

    The specific issues we are having:

    - Different sites with different features enabled, different external services we have to talk to, and different business rules.
    - Different deployment types (live, test, training).
    - Configuration keys change across versions (get added, removed), meaning we have to update all the duplicate files.
    - We still need to be able to alter keys while the application is running.

    Our current thoughts on how we might approach this:

    - Move the configuration into dynamically compiled code (possibly Boo, Binsor or JavaScript).
    - Have some form of diffing/merging configuration: combine a default config with a live/test/training config and a site-specific config.

  • SQL Compact allow only one WCF Client

    - by Andreas Hoffmann
    Hi, I'm writing a little chat application. To save some info like username and password, I store the data in a SQL Server Compact 3.5 SP1 database. Everything works fine, but if another client (the same .exe on the same machine) wants to access the service, I get an EndpointNotFound exception from ServiceReference.Class.Open() at the second client. If I remove the CE data-access code I get no error (with an if (false)). Where is the problem? I googled for this, but no one seems to have had the same error :(

    SOLUTION: I used the wrapper in http://csharponphone.blogspot.com/2007/01/keeping-sqlceconnection-open-and-thread.html for thread safety, and now it works :)

    Client code:

        public test()
        {
            var newCompositeType = new Client.ServiceReference1.CompositeType();
            newCompositeType.StringValue = "Hallo" + DateTime.Now.ToLongTimeString();
            newCompositeType.Save = (Console.ReadKey().Key == ConsoleKey.J);

            ServiceReference1.Service1Client sc = new Client.ServiceReference1.Service1Client();
            sc.Open();
            Console.WriteLine("Save " + newCompositeType.StringValue);
            sc.GetDataUsingDataContract(newCompositeType);
            sc.Close();
        }

    Server code:

        public CompositeType GetDataUsingDataContract(CompositeType composite)
        {
            if (composite.Save)
            {
                SqlCeConnection con = new SqlCeConnection(Properties.Settings.Default.Con);
                con.Open();
                var com = con.CreateCommand();
                com.CommandText = "SELECT * FROM TEST";
                SqlCeResultSet result = com.ExecuteResultSet(ResultSetOptions.Scrollable | ResultSetOptions.Updatable);
                var rec = result.CreateRecord();
                rec["TextField"] = composite.StringValue;
                result.Insert(rec);
                result.Close();
                result.Dispose();
                com.Dispose();
                con.Close();
                con.Dispose();
            }
            return composite;
        }

  • OpenGL bitmap text fails after drawing polygon

    - by kaykun
    I'm using Win32 and OpenGL to draw text onto a window, using the bitmap font method with wglUseFontBitmaps. Here is my main rendering function:

        glClear(GL_COLOR_BUFFER_BIT);

        glPushMatrix();
        glColor3f(1.0f, 0.0f, 1.0f);
        glBegin(GL_QUADS);
        glVertex2f(0.0f, 0.0f);
        glVertex2f(128.0f, 0.0f);
        glVertex2f(128.0f, 128.0f);
        glVertex2f(0.0f, 128.0f);
        glEnd();
        glPopMatrix();

        glPushMatrix();
        glColor3f(1.0f, 1.0f, 1.0f);
        glRasterPos2i(200, 200);
        glListBase(fontList);
        glCallLists(5, GL_UNSIGNED_BYTE, "Test.");
        glPopMatrix();

        SwapBuffers(hDC);

    As you can see it's very simple, and the only thing it's supposed to do is draw a quadrilateral and the text "Test." But drawing a polygon seems to mess up any text operations I try to do after it. If I place the text-drawing calls before the polygon, both the text and the polygon draw fine. Is there something I'm missing here?

    Edit: This problem only happens when the window is run in fullscreen, set up with ChangeDisplaySettings. Any reason why this would be?

  • Elmah on a WebServer

    - by Subbarao
    I set up ELMAH to work on a website. It works fine on my local machine, but when I moved it to a web server I get this exception:

        Configuration Error
        Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
        Parser Error Message: Unrecognized configuration section elmah/security.

        Source Error:
        Line 110: </connectionStrings>
        Line 111: <elmah>
        Line 112:   <security allowRemoteAccess="1" />
        Line 113:   <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="CadaretGrantConnectionString"/>
        Line 114: <!-- Don't log 404 -->

    It points at line 112. What should be done in order to get ELMAH to work with remote access? Below is my configuration:

        <elmah>
          <security allowRemoteAccess="1" />
          <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="ConnectionString"/>
          <!-- Don't log 404 -->
          <errorFilter>
            <test>
              <equal binding="HttpStatusCode" value="404" valueType="Int32"/>
            </test>
          </errorFilter>
        </elmah>
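
    The "Unrecognized configuration section elmah/security" error usually means the <configSections> declarations for ELMAH are missing from (or not reachable in) the Web.config the server actually uses; on the local machine they may have been inherited from another configuration level. A hedged sketch of the declarations matching the three sections used above:

        <configSections>
          <sectionGroup name="elmah">
            <section name="security" requirePermission="false" type="Elmah.SecuritySectionHandler, Elmah" />
            <section name="errorLog" requirePermission="false" type="Elmah.ErrorLogSectionHandler, Elmah" />
            <section name="errorFilter" requirePermission="false" type="Elmah.ErrorFilterSectionHandler, Elmah" />
          </sectionGroup>
        </configSections>

    With the sections recognized, allowRemoteAccess="1" (or "yes") is what actually enables viewing elmah.axd from remote machines.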

  • Calling web service using javascript error

    - by politrok
    Hi, I am trying to call a web service using JSON. When I call the web service without the expected parameters it works, but when I try to send a parameter I get an error. This is my code:

        function GetSynchronousJSONResponse(url, postData) {
            var xmlhttp = null;
            if (window.XMLHttpRequest)
                xmlhttp = new XMLHttpRequest();
            else if (window.ActiveXObject) {
                if (new ActiveXObject("Microsoft.XMLHTTP"))
                    xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
                else
                    xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
            }
            url = url + "?rnd=" + Math.random(); // to ensure a non-cached version
            xmlhttp.open("POST", url, false);
            xmlhttp.setRequestHeader("Content-Type", "application/json; charset=utf-8");
            xmlhttp.send(postData);
            var responseText = xmlhttp.responseText;
            return responseText;
        }

        function Test() {
            var result = GetSynchronousJSONResponse(
                'http://localhost:1517/Mysite/Myservice.asmx/myProc',
                '{"MyParam":"' + 'test' + '"}');
            result = eval('(' + result + ')');
            alert(result.d);
        }

    This is the error:

        System.InvalidOperationException: Request format is invalid: application/json; charset=utf-8.
           at System.Web.Services.Protocols.HttpServerProtocol.ReadParameters()
           at System.Web.Services.Protocols.WebServiceHandler.CoreProcessRequest

    What is wrong? Thanks in advance.
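
    "Request format is invalid: application/json" from an .asmx endpoint usually means the service is not marked as callable from script, so ASP.NET tries to parse the body as a form post instead of JSON. A hedged sketch of the service side; only the ScriptService attribute and the matching parameter name are the point, the other names are placeholders based on the URL in the question:

        using System.Web.Services;
        using System.Web.Script.Services;   // .NET 3.5 / ASP.NET AJAX

        [WebService(Namespace = "http://tempuri.org/")]
        [ScriptService]   // allows JSON POSTs like the one made from GetSynchronousJSONResponse
        public class Myservice : WebService
        {
            [WebMethod]
            public string myProc(string MyParam)   // name must match the "MyParam" key in the JSON body
            {
                return "got: " + MyParam;
            }
        }

    The ASP.NET AJAX script handlers also have to be registered in web.config (they are by default in a .NET 3.5 web-application project).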

  • xsd.exe - schema to class - for use with WCF

    - by NealWalters
    I have created a schema as an agreed-upon interface between our company and an external company. I am now creating a WCF C# web service to handle the interface. I ran the XSD utility and it created a C# class. The schema was built in BizTalk and references other schemas, so all in all there are over 15 classes being generated. I put the [DataContract] attribute in front of each of the classes.

    Do I have to put the [DataMember] attribute on every single property? When I generate a test client program, the proxy does not have any code for any of these 15 classes. We used to use this technique with .asmx services, but I am not sure if it will work the same with WCF. If we change the schema, we would want to regenerate the WCF class, and then we would have to redecorate it each time with all the [DataMember] attributes. Is there a newer tool similar to xsd.exe that will work better with WCF?

    Thanks, Neal Walters

    SOLUTION (buried in one of Saunders' answers/comments): add XmlSerializerFormat to the operation definition on the interface:

        [OperationContract]
        [XmlSerializerFormat] // ADD THIS LINE
        Transaction SubmitTransaction(Transaction transactionIn);

    Two notes:

    1. After I did this, I saw a lot more .xsds in my proxy (service reference) test client program, but I didn't see the new classes in my IntelliSense.
    2. For some reason, until I did a build on the project, I didn't get all the classes in the IntelliSense (not sure why).

  • Jquery $.get against ASP.NET MVC not working in Opera and Firefox

    - by codelove
    Let me first put the code snippets here. I am just using the ASP.NET MVC project Visual Studio creates out of the box, so I am only showing the snippets I added to it.

    Site.master's head section:

        <head runat="server">
            <title><asp:ContentPlaceHolder ID="TitleContent" runat="server" /></title>
            <script src="../../Scripts/jquery-1.3.2.min.js" type="text/javascript"></script>
            <link href="../../Content/Site.css" rel="stylesheet" type="text/css" />
            <script src="../../Scripts/test.js" type="text/javascript"></script>
            <script type="text/javascript">$(document).ready(ready);</script>
        </head>

    Content of test.js:

        function ready() {
            $.get("/Home/TestAjax", ajaxResponse);
        }

        function ajaxResponse(data) {
            alert("got response from server: " + data);
        }

    Method in HomeController:

        public String TestAjax()
        {
            if (Request.IsAjaxRequest())
            {
                return "Got ajax request!";
            }
            else
            {
                return "Non-ajax request";
            }
        }

    Now, the problem I am seeing in Firefox 3.5.30729 (Firebug) is that when the AJAX request goes out, IIS 7 on the remote box sends HTTP 302 back, which does the redirect and forces another GET request, but it is not asynchronous. Opera also doesn't work, so I assume it is the same problem. However, the above code works just fine in IE 8, Chrome, and Safari. On localhost all of the above browsers work as expected, including Firefox and Opera; they all receive "Got ajax request!" as the response from the server.

    Anybody have any ideas what's going on here and how to fix it? I am looking for a real solution, or at least an explanation as to what is going on and why.

  • Windows Sidebar gadget not working in Vista Home Premium (i.e. 64-bit OS)

    - by stanley
    Hi All, I have developed a Windows Sidebar gadget which plays videos in a Flash player. It works in Vista Home Basic (32-bit OS) but doesn't work in Vista Home Premium (64-bit OS). I use Flash Player 9 and ActionScript 3.0. Can anyone help me, please?

    This is the HTML content for the player:

        <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000"
                codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"
                width="130" height="200" id="FLVPlayer">
            <param name="movie" value="test.swf" />
            <param name="salign" value="lt" />
            <param name="quality" value="high" />
            <param name="scale" value="noscale" />
            <param name="FlashVars" value="&MM_ComponentVersion=1&skinName=Clear_Skin_1&streamName=2973&autoPlay=true&autoRewind=true" />
            <embed src="test.swf"
                   flashvars="&MM_ComponentVersion=1&skinName=Clear_Skin_1&streamName=2973&autoPlay=true&autoRewind=true"
                   quality="high" scale="noscale" width="130" height="200" name="FLVPlayer" salign="LT"
                   type="application/x-shockwave-flash"
                   pluginspage="http://www.adobe.com/shockwave/download/download.cgi?P1_Prod_Version=ShockwaveFlash" />

  • DrScheme versus mzscheme: treatment of definitions

    - by speciousfool
    One long term project I have is working through all the exercises of SICP. I noticed something a bit odd with the most recent exercise. I am testing a Huffman encoding tree. When I execute the following code in DrScheme I get the expected result:

        (a d a b b c a)

    However, if I execute this same code in mzscheme by calling (load "2.67.scm") or by running mzscheme -f 2.67.scm, it reports:

        symbols: expected symbols as arguments, given: (leaf D 1)

    My question is: why? Is it because mzscheme and DrScheme use different rules for loading program definitions? The program code is below.

        ;; Define an encoding tree and a sample message
        ;; Use the decode procedure to decode the message, and give the result.
        (define (make-leaf symbol weight) (list 'leaf symbol weight))
        (define (leaf? object) (eq? (car object) 'leaf))
        (define (symbol-leaf x) (cadr x))
        (define (weight-leaf x) (caddr x))

        (define (make-code-tree left right)
          (list left
                right
                (append (symbols left) (symbols right))
                (+ (weight left) (weight right))))

        (define (left-branch tree) (car tree))
        (define (right-branch tree) (cadr tree))

        (define (symbols tree)
          (if (leaf? tree)
              (list (symbol-leaf tree))
              (caddr tree)))

        (define (weight tree)
          (if (leaf? tree)
              (weight-leaf tree)
              (cadddr tree)))

        (define (decode bits tree)
          (define (decode-1 bits current-branch)
            (if (null? bits)
                '()
                (let ((next-branch (choose-branch (car bits) current-branch)))
                  (if (leaf? next-branch)
                      (cons (symbol-leaf next-branch)
                            (decode-1 (cdr bits) tree))
                      (decode-1 (cdr bits) next-branch)))))
          (decode-1 bits tree))

        (define (choose-branch bit branch)
          (cond ((= bit 0) (left-branch branch))
                ((= bit 1) (right-branch branch))
                (else (error "bad bit -- CHOOSE-BRANCH" bit))))

        (define (test s-exp)
          (display s-exp)
          (newline))

        (define sample-tree
          (make-code-tree (make-leaf 'A 4)
                          (make-code-tree (make-leaf 'B 2)
                                          (make-code-tree (make-leaf 'D 1)
                                                          (make-leaf 'C 1)))))

        (define sample-message '(0 1 1 0 0 1 0 1 0 1 1 1 0))

        (test (decode sample-message sample-tree))

  • Help with SQL query (add 5% to users with conditions)

    - by Mestika
    Hi everyone, I'm having some difficulties with a query whose purpose is to give users with more than one thread in CS in the current year a 5% point "raise". My relational schema looks like this:

        Thread         = (threadid, threadname, threadLocation)
        threadoffering = (threadid, season, year, user)
        user           = (name, points)

    What I need is to check something like:

        WHERE thread.threadid = threadoffering.threadid
        AND threadoffering.year AND threadoffering.season = currentDate
        AND threadoffering.user > 1
        GIVE 5% raise TO user.points

    I hope it is explained thoroughly, but otherwise here it is in short: give a 5% point raise to all users who have more than one thread in threadLocation CS in the current year and season (always dynamic, so for example now year = 2010 and season = spring).

    I am looking forward to your answer. Sincerely, Emil
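
    A hedged sketch of one way to write it (roughly MySQL-flavoured; the date/season handling and the literal 'CS' / 'spring' values are assumptions that will need adapting to your DBMS and data):

        UPDATE user
        SET points = points * 1.05
        WHERE name IN (
            SELECT t.user
            FROM threadoffering t
            JOIN thread th ON th.threadid = t.threadid
            WHERE th.threadLocation = 'CS'
              AND t.year = YEAR(CURRENT_DATE)   -- "current year"
              AND t.season = 'spring'           -- "current season"
            GROUP BY t.user
            HAVING COUNT(*) > 1                 -- more than one CS thread
        );

    The GROUP BY/HAVING pair is what expresses "more than one thread per user"; the multiplication by 1.05 is the 5% raise.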

  • How do I read a secure rss feed into a SyndicationFeed without providing credentials?

    - by John Kaster
    For whatever reason, IBM uses https (without requiring credentials) for their RSS feeds. I'm trying to consume https://www.ibm.com/developerworks/mydeveloperworks/blogs/roller-ui/rendering/feed/gradybooch/entries/rss?lang=en with a .NET 4 SyndicationFeed. I can open this feed in a browser and it loads just fine. Here's the code:

        using (XmlReader xml = XmlReader.Create("https://www.ibm.com/developerworks/mydeveloperworks/blogs/roller-ui/rendering/feed/gradybooch/entries/rss?lang=en"))
        {
            var items = from item in SyndicationFeed.Load(xml).Items
                        select item;
        }

    Here's the exception:

        System.Net.WebException was unhandled by user code
          Message=The remote server returned an error: (500) Internal Server Error.
          Source=System
          StackTrace:
               at System.Net.HttpWebRequest.GetResponse()
               at System.Xml.XmlDownloadManager.GetNonFileStream(Uri uri, ICredentials credentials, IWebProxy proxy, RequestCachePolicy cachePolicy)
               at System.Xml.XmlDownloadManager.GetStream(Uri uri, ICredentials credentials, IWebProxy proxy, RequestCachePolicy cachePolicy)
               at System.Xml.XmlUrlResolver.GetEntity(Uri absoluteUri, String role, Type ofObjectToReturn)
               at System.Xml.XmlReaderSettings.CreateReader(String inputUri, XmlParserContext inputContext)
               at System.Xml.XmlReader.Create(String inputUri, XmlReaderSettings settings, XmlParserContext inputContext)
               at System.Xml.XmlReader.Create(String inputUri)
               at EDN.Util.Test.FeedAggTest.LoadFeedInfoTest() in D:\cdn\trunk\CDN\Dev\Shared\net\EDN.Util\EDN.Util.Test\FeedAggTest.cs:line 126

    How do I configure the reader to work with an https feed?
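
    The 500 is coming back from the server rather than from the https handling itself (https URLs are accepted by XmlReader.Create), so one thing worth trying is making the HTTP request explicitly, sending a browser-like User-Agent, and handing the response stream to the reader. A hedged sketch; the idea that the site objects to the default .NET request headers is an assumption:

        using System.Net;
        using System.ServiceModel.Syndication;
        using System.Xml;

        static class FeedHelper
        {
            public static SyndicationFeed LoadWithUserAgent(string url)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                // Some servers answer 500/403 to clients that send no "real" User-Agent.
                request.UserAgent = "Mozilla/5.0 (compatible; FeedReader)";

                using (var response = request.GetResponse())
                using (var xml = XmlReader.Create(response.GetResponseStream()))
                {
                    return SyndicationFeed.Load(xml);
                }
            }
        }

    Catching the WebException and reading the body of ((HttpWebResponse)ex.Response) would also show what the server is actually complaining about.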

  • How to pass PHP variable as FlashVars via SWFObject

    - by Matt
    I am trying to take a PHP variable and pass it along to Flash via FlashVars. My end goal is to pass a string formatted as XML to Flash, but because I'm struggling I've stripped everything down to the basics: I'm just trying to pass a simple PHP string variable to Flash via FlashVars with SWFObject, but something isn't right. The page won't load when I try to pass the variable inside PHP tags, but it will load if I just pass a hard-coded string.

    The basic structure of my page is that I have some PHP declared at the top:

        <?php $test = "WTF"; ?>

    Some HTML (excluded here for simplicity's sake) and then the JavaScript SWFObject embed within my HTML:

        <script type="text/javascript" src="js/swfobject2.js"></script>
        <script type="text/javascript">
        // <![CDATA[
        var swfURL = "swfs/Init-Flash-PHP.swf";

        var flashvars = {};
        flashvars.theXML = <?php print $test ?>;

        var params = {};
        //params.menu = "false";
        params.scale = "showAll";
        params.bgcolor = "#000000";
        params.salign = "TL";
        //params.wmode = "transparent";
        params.allowFullScreen = "true";
        params.allowScriptAccess = "always";

        var attributes = {};
        attributes.id = "container";
        attributes.name = "container";

        swfobject.embedSWF(swfURL, "container", '100%', '100%', "9.0.246",
            "elements/swfs/expressinstall.swf", flashvars, params, attributes);
        // ]]>
        </script>

    And the bare essentials of the ActionScript 3 code:

        _paramObj = LoaderInfo(stage.loaderInfo).parameters;
        theText_txt.text = _paramObj['theXML'];

    How do I pass a PHP variable using SWFObject and FlashVars? Thanks.
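
    The immediate problem is visible in the generated page: <?php print $test ?> emits the value without quotes, so the browser sees flashvars.theXML = WTF; (an undefined identifier) and the script dies. Quoting the value fixes the simple case; json_encode() is the safer general fix because it also escapes quotes and newlines, which matters once the value is a whole XML string. A hedged sketch:

        <?php $test = "WTF"; ?>
        <script type="text/javascript">
        var flashvars = {};
        // json_encode() emits a valid, quoted JavaScript string literal.
        flashvars.theXML = <?php echo json_encode($test); ?>;
        </script>

    Depending on the characters in the XML, the value may also need URL-encoding on the way into FlashVars (encodeURIComponent in JavaScript, a matching decode on the ActionScript side); treat that part as an assumption to verify.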

  • Project setup for creating third party libraries for Android

    - by Jarle Hansen
    Hi all, I am creating a library for Android that others can include in their own projects. So far I have been working on it as a normal Java project with JDK 1.6 set up as the system library. This works just fine in Eclipse when I add the android.jar. The issue comes when I try to run my build script. I am running Gradle and doing a normal compile-and-test build cycle. My thought was that it does not matter if I compile it with a normal JDK, since this is not a standalone application. The benefit of creating a normal Java project is that Gradle supports it much better. My project also does not contain any UI at all.

    However, the problem is that the android.jar and the JDK contain lots of the same classes, and I think this is what messes up my build script. Everything crashes when running the tests (the tests are in the same project under src/test/java).

    My question is: how should I create this project, which is meant to be included in Android projects as a third-party library? Should I create it as an Android project in Eclipse even though I am only creating a library that does not use any of the UI features? Also, should the tests be in a separate project? Thanks for all responses!

  • Resolving a Generic with a Generic parameter in Castle Windsor

    - by Aaron Fischer
    I am trying to register a type like IRequestHandler`1[GenericTestRequest`1[T]], which will be implemented by GenericTestRequestHandler`1[T], but I currently get an error from Windsor: "Castle.MicroKernel.ComponentNotFoundException : No component for supporting the service". Is this type of operation supported, or is it too far removed from the supported register(Component.For(typeof(IList<>)).ImplementedBy(typeof(List<>)))? Below is an example of a failing test.

        //////////////////////////////////////////////////////
        public interface IRequestHandler { }

        public interface IRequestHandler<TRequest> : IRequestHandler where TRequest : Request { }

        public class GenericTestRequest<T> : Request { }

        public class GenericTestRequestHandler<T> : RequestHandler<GenericTestRequest<T>> { }

        [TestFixture]
        public class ComponentRegistrationTests
        {
            [Test]
            public void DoNotAutoRegisterGenericRequestHandler()
            {
                var IOC = new Castle.Windsor.WindsorContainer();
                var type = typeof(IRequestHandler<>).MakeGenericType(typeof(GenericTestRequest<>));

                IOC.Register(Component.For(type).ImplementedBy(typeof(GenericTestRequestHandler<>)));

                var requestHandler = IoC.Container.Resolve(typeof(IRequestHandler<GenericTestRequest<String>>));

                Assert.IsInstanceOf<IRequestHandler<GenericTestRequest<String>>>(requestHandler);
                Assert.IsNotNull(requestHandler);
            }
        }
