Search Results

Search found 9410 results on 377 pages for 'simulator difference'.

Page 328/377 | < Previous Page | 324 325 326 327 328 329 330 331 332 333 334 335  | Next Page >

  • Does running a program from VS2005 cause it to work any differently than the .exe file?

    - by Bryan
    There is a program where I work that runs fine from the .exe file but behaves differently from expected when opened in VS2005 and run from there. I am therefore asking whether anyone knows of anything that would work in the .exe file but not when debugging from VS. I am not able to post the code for the buttons I'm talking about, but I'll try to explain as best I can. There is a receiver hooked up to the computer. When the button is pressed in the program, it shows a message and waits for a signal to be received. After the signal is heard, the first message box is supposed to close and another is supposed to open. When using the .exe file this happens just fine. However, when running the program from VS2005 (the same project from which the .exe was built), the second message doesn't come up when it is supposed to, and when I can make it come up, the first box doesn't close. There is also a timer involved, if that helps. Also, is there a fundamental difference between how the two operate when executing the program? If I need to make anything clearer or give more details, please let me know.
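
    One concrete difference between the two modes is that F5 in VS2005 attaches a debugger (and, for some project types, runs the program under the vshost process), which changes timing and can reorder UI and timer events. A minimal, hedged C# probe to confirm which mode the program is actually running in (names are illustrative):

    ```csharp
    using System;
    using System.Diagnostics;

    static class DebugProbe
    {
        // Logs whether the process is running under an attached debugger,
        // which is one real difference between F5 in VS2005 and the raw .exe.
        public static void Report()
        {
            Console.WriteLine("Debugger attached: {0}", Debugger.IsAttached);
            Console.WriteLine("Process name:      {0}", Process.GetCurrentProcess().ProcessName);
        }
    }
    ```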

    Read the article

  • Searching in graphs/trees with depth-first/breadth-first/A* algorithms

    - by devoured elysium
    I have a couple of questions about searching in graphs/trees. Let's assume I have an empty chess board and I want to move a pawn around from point A to B.

    A. When using depth-first search or breadth-first search, must we use open and closed lists? That is, a list of all the elements still to be checked, and another with all the elements that have already been checked? Is it even possible to do it without those lists? What about A*, does it need them?

    B. When using lists, after having found a solution, how can you get the sequence of states from A to B? I assume that instead of just putting the (x, y) states in the open and closed lists, you put an "extended state" formed of (x, y, parent_of_this_node)?

    C. State A has 4 possible moves (right, left, up, down). If my first move is left, should I allow the next state to come back to the original state, i.e. make the "right" move? If not, must I traverse the search tree every time to check which states I've already been to?

    D. When I see a state in the tree that I've already visited, should I just ignore it, since I know it's a dead end? I guess to do this I'd have to keep the list of visited states, right?

    E. Is there any difference between search trees and graphs? Are they just different ways of looking at the same thing?
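
    For question B in particular, the usual approach is to record a parent ("came from") link for every state placed on the open list and then walk those links backwards once the goal is reached; the same visited map also answers C and D. A minimal breadth-first sketch on an empty board (board size and coordinates are illustrative):

    ```python
    from collections import deque

    def bfs_path(start, goal, width=8, height=8):
        """Breadth-first search on an empty board, returning the path start -> goal."""
        came_from = {start: None}          # doubles as the visited ("closed") set
        frontier = deque([start])          # the "open" list
        while frontier:
            x, y = frontier.popleft()
            if (x, y) == goal:
                path, node = [], (x, y)
                while node is not None:    # walk the parent links back to the start
                    path.append(node)
                    node = came_from[node]
                return list(reversed(path))
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in came_from:
                    came_from[(nx, ny)] = (x, y)
                    frontier.append((nx, ny))
        return None  # no route

    print(bfs_path((0, 0), (3, 4)))
    ```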

    Read the article

  • Questions about the Backpropagation Algorithm

    - by Colemangrill
    I have a few questions concerning backpropagation. I'm trying to learn the fundamentals behind neural network theory and wanted to start small by building a simple XOR classifier. I've read a lot of articles and skimmed multiple textbooks, but I can't seem to teach this thing the pattern for XOR. Firstly, I am unclear about the learning model for backpropagation. Here is some pseudo-code to represent how I am trying to train the network. (Let's assume my network is set up properly, i.e. multiple inputs connect to a hidden layer, which connects to an output layer, all wired up properly.)

    SET guess = getNetworkOutput() // Note this is using a sigmoid activation function.
    SET error = desiredOutput - guess
    SET delta = learningConstant * error * sigmoidDerivative(guess)
    For Each Node in inputNodes
      For Each Weight in inputNodes[n]
        inputNodes[n].weight[j] += delta;
    // At this point, I am assuming the first layer has been trained.
    // Then I recurse a similar function over the hidden layer and output layer.
    // The prime difference being that it further divvies up the adjustment delta.

    I realize this is probably not enough to go on, and I will gladly expand on any part of my implementation. Using the above algorithm, my neural network does get trained, kind of, but not properly. The output is always: XOR 1 1 [smallest number], XOR 0 0 [largest number], XOR 1 0 [medium number], XOR 0 1 [medium number]. I can never train [1,1] and [0,0] to be the same value. If you have any suggestions, additional resources, articles, blogs, etc. for me to look at, I am very interested in learning more about this topic. Thank you for your assistance, I appreciate it greatly!
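
    For what it's worth, the symptom described (1,1 and 0,0 never converging to the same value) often comes from the hidden-layer delta not being weighted by the outgoing connection. Below is a self-contained, hedged sketch of a 2-2-1 sigmoid network trained on XOR with plain backpropagation; all constants are illustrative and, depending on the random initialisation, it may need more epochs or a different seed to converge:

    ```python
    import math, random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def forward(w_h, w_o, inputs):
        x = inputs + [1.0]                                            # bias input
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_h]
        hb = h + [1.0]                                                # bias for the output unit
        out = sigmoid(sum(w * hi for w, hi in zip(w_o, hb)))
        return x, h, hb, out

    random.seed(1)
    w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # 2 hidden units
    w_o = [random.uniform(-1, 1) for _ in range(3)]                      # 1 output unit
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
    lr = 0.5

    for _ in range(20000):
        for inputs, target in data:
            x, h, hb, out = forward(w_h, w_o, inputs)
            d_out = (target - out) * out * (1 - out)                  # sigmoid derivative at the output
            d_h = [d_out * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]  # weighted by the outgoing weight
            for i in range(3):
                w_o[i] += lr * d_out * hb[i]
            for i in range(2):
                for j in range(3):
                    w_h[i][j] += lr * d_h[i] * x[j]

    for inputs, target in data:
        print(inputs, round(forward(w_h, w_o, inputs)[3], 3), "expected", target)
    ```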

    Read the article

  • C++/Win32 : XP Visual Styles - no controls are showing up?

    - by mrl33t
    Okay, so I'm pretty new to C++ and the Windows API, and I'm writing a small application. I wanted it to make use of visual styles in XP, Vista and Windows 7, so I added this line to the top of my code: #pragma comment(linker,"\"/manifestdependency:type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' processorArchitecture='*' publicKeyToken='6595b64144ccf1df' language='*'\"") It seemed to work perfectly on my Windows 7 machine and also on a Vista machine. But when I tried the application on XP, it wouldn't load any controls (e.g. buttons, labels etc.); not even message boxes would display. This image shows a small test application I've put together to demonstrate what I'm trying to explain: http://img704.imageshack.us/img704/2250/myapp.png In this test application I'm not using any particularly fancy or complicated code. I've effectively just taken the most basic sample code from the MSDN Library (http://msdn.microsoft.com/en-us/library/ff381409.aspx) and added a section to the WM_CREATE message to create a button: MyBtn = CreateWindow(L"Button", L"My Button", BS_PUSHBUTTON | WS_CHILD | WS_VISIBLE, 25, 25, 100, 30, hWnd, NULL, hInst, 0); But I just can't figure out what's going on and why it's not working. Any ideas? Thank you in advance. (By the way, the application works in XP if I remove the manifest section, obviously without visual styles. I should also mention that the app was built using Visual C++ 2010 Express on a Windows 7 machine, if that makes a difference.)
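
    One commonly cited cause of exactly this XP-only symptom is that comctl32 version 6 (which the Common-Controls manifest pulls in) does not register the standard control classes until the process calls InitCommonControlsEx. A hedged sketch of that call, made before creating any windows; it assumes the SDK headers target XP or later so that ICC_STANDARD_CLASSES is defined:

    ```cpp
    #include <windows.h>
    #include <commctrl.h>
    #pragma comment(lib, "comctl32.lib")

    // Register the standard Win32 control classes (Button, Static, Edit, ...)
    // from comctl32 v6 before any window creation; on XP this is required
    // when the Common-Controls 6.0 manifest dependency is used.
    static BOOL InitVisualStyles(void)
    {
        INITCOMMONCONTROLSEX icc;
        icc.dwSize = sizeof(icc);
        icc.dwICC  = ICC_STANDARD_CLASSES | ICC_WIN95_CLASSES;
        return InitCommonControlsEx(&icc);
    }
    ```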

    Read the article

  • iOS UIImageView dataWithContentsOfURL returning empty

    - by user761389
    I'm trying to display an image from a URL in a UIImageView and I'm seeing some very peculiar results. The bit of code I'm testing with is below: imageURL = @"http://images.shopow.co.uk/image/user_dyn/1073/32/32"; imageURL = @"http://images.shopow.co.uk/assets/profile_images/default/32_32/avatar-male-01.jpg"; NSURL *imageURLRes = [NSURL URLWithString:imageURL]; NSData *imageData = [NSData dataWithContentsOfURL:imageURLRes]; UIImage *image = [UIImage imageWithData:imageData]; NSLog(@"Image Data: %@", imageData); In its current form I can see data in the output window, which is what I'd expect. However, if I comment out the second imageURL so I'm referencing the first, I get empty data and therefore nil is returned by imageWithData. What is possibly more confusing is that the first image is basically the same as the second, except that it's been through a PHP processing script. I'm nearly certain that it isn't the script that's causing the issue, because if I use imageURL = @"http://images.shopow.co.uk/image/product_dynimg/389620/32/32" instead, the image is displayed, and this uses the same image processing script. I'm struggling to find any difference in the images that would cause this to occur. Any help would be appreciated.

    Read the article

  • Setting minOccurs="0" (not required) on web service parameters of type int

    - by Alex Angas
    I have an ASP.NET 2.0 web method with the following signature:

    [WebMethod]
    public QueryResult[] GetListData(string url, string list, string query, int noOfItems, string titleField)

    I'm running the disco.exe tool to generate .wsdl and .disco files from this web service for use in SharePoint. The following WSDL for the parameters is being generated:

    <s:element minOccurs="0" maxOccurs="1" name="url" type="s:string" />
    <s:element minOccurs="0" maxOccurs="1" name="list" type="s:string" />
    <s:element minOccurs="0" maxOccurs="1" name="query" type="s:string" />
    <s:element minOccurs="1" maxOccurs="1" name="noOfItems" type="s:int" />
    <s:element minOccurs="0" maxOccurs="1" name="titleField" type="s:string" />

    Why does the int parameter have minOccurs set to 1 instead of 0, and how do I change it? I've tried the following without success:

    - [XmlElementAttribute(IsNullable=false)] in the parameter declaration: makes no difference (as expected when you think about it)
    - [XmlElementAttribute(IsNullable=true)] in the parameter declaration: gives error "IsNullable may not be 'true' for value type System.Int32. Please consider using Nullable instead."
    - changing the parameter type to int?: keeps minOccurs="1" and adds nillable="true"
    - [XmlIgnore] in the parameter declaration: the parameter is never output to the WSDL at all
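
    Background that may help: XmlSerializer emits minOccurs="1" for value types because an int has no way to be "absent". One pattern it does honour is a companion Specified member, so one hedged workaround is to move the parameters into a message/wrapper class; the class and member names below are illustrative, not part of the original service:

    ```csharp
    using System.Xml.Serialization;

    // Hypothetical request wrapper: XmlSerializer emits minOccurs="0" for
    // noOfItems because of the companion "Specified" member below.
    public class GetListDataRequest
    {
        public string url;
        public string list;
        public string query;

        public int noOfItems;

        [XmlIgnore]
        public bool noOfItemsSpecified;   // false => the element is omitted on the wire

        public string titleField;
    }
    ```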

    Read the article

  • "Exception in thread "main" java.lang.NoSuchMethodError Do i miss library in the installed JDk?

    - by Ahmad
    Hello everybody. I was using the JDK just fine, writing code, compiling it with "javac" and then running it with "java". But recently, when I write a program, compile it and try to run it, this exception appears: "Exception in thread "main" java.lang.NoSuchMethodError". At first I thought there was something wrong in my code. I searched the internet for a solution to this problem but didn't find anything that helped. Then I tried to run the "HelloWorld" example I had made before, and it runs. I copied the code, pasted it into another file, changed the name to "HelloWorld2", compiled it with "javac" and tried to run it with "java"; the same exception appears. I was surprised: why? It is the same code. Then I used "javap", which disassembles the class file, on both. In the first one (the old one) I found "public static void main(java.lang.String[])"; but in the second (the new one) "public static void main(String[])", without java.lang. Then I recompiled the old one (which works and runs) with "javac", and when I try to run it, it no longer runs and gives me the same exception. I tried with some of my old programs: they run, but if I recompile them with "javac" they stop working. I have searched for a solution to this problem and found nothing.

    Read the article

  • XML configuration of Zend_Form: child nodes and attributes not always equal?

    - by Cez
    A set of forms (using Zend_Form) that I have been working on were causing me some headaches trying to figure out what was wrong with my XML configuration, as I kept getting unexpected HTML output for a particular INPUT element. It was supposed to be getting a default value, but nothing appeared. It appears that the following 2 pieces of XML are not equal when used to instantiate Zend_Form:

    Snippet #1:

    <form>
      <elements>
        <test type="hidden">
          <options ignore="true" value="foo"/>
        </test>
      </elements>
    </form>

    Snippet #2:

    <form>
      <elements>
        <test type="hidden">
          <options ignore="true">
            <value>foo</value>
          </options>
        </test>
      </elements>
    </form>

    The type of the element doesn't appear to make a difference, so it doesn't appear to be related to hidden fields. Is this expected or not?

    Read the article

  • How does SVN store commit time

    - by Salman
    I am working on a project that involves extracting details from an SVN server using SVNKit. My project is already complete and has been working well for a while now. During testing, I noticed something very strange: the commit times my extraction produces always seem different from what is shown in the SVN logs. I couldn't find any code in my project that could be introducing this difference, so now I am looking at how the SVN server stores the commit time itself. As we have developers working from different parts of the world, and thus in different timezones, I was thinking that SVN might store times after converting them to GMT, or to the timezone of the system the SVN server is running on. But that does not seem to be happening. Instead, the times appear to be stored as the time when the commit was done, in that local timezone itself. I have been unable to find any substantial document on the internet to support my theory so far. Can anybody briefly explain how SVN stores the commit time for each change? Documentation links on this would be a great help.
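
    For reference, Subversion records the commit time as the svn:date revision property in UTC (ISO 8601, e.g. 2010-05-12T09:30:00.000000Z); clients convert it to a local time zone only for display, so a mismatch usually comes from how the value is formatted rather than how it is stored. A hedged SVNKit-side sketch, assuming an SVNLogEntry is already in hand:

    ```java
    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.TimeZone;
    import org.tmatesoft.svn.core.SVNLogEntry;

    public class CommitTimePrinter {
        // SVNLogEntry.getDate() is a plain java.util.Date, i.e. a UTC instant;
        // any time-zone difference comes from how that instant is formatted.
        public static void print(SVNLogEntry entry) {
            Date when = entry.getDate();
            SimpleDateFormat utc = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss zzz");
            utc.setTimeZone(TimeZone.getTimeZone("UTC"));
            SimpleDateFormat local = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss zzz");
            System.out.println("r" + entry.getRevision() + " stored (UTC): " + utc.format(when));
            System.out.println("r" + entry.getRevision() + " displayed:    " + local.format(when));
        }
    }
    ```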

    Read the article

  • Dereferencing the null pointer

    - by zilgo
    The standard says that dereferencing the null pointer leads to undefined behaviour. But what is "the null pointer"? In the following code, which is what we call "the null pointer"?

    struct X
    {
        static X* get() { return reinterpret_cast<X*>(1); }
        void f() { }
    };

    int main()
    {
        X* x = 0;
        (*x).f();   // the null pointer? (1)
        x = X::get();
        (*x).f();   // the null pointer? (2)
        x = reinterpret_cast<X*>( X::get() - X::get() );
        (*x).f();   // the null pointer? (3)
        (*(X*)0).f();   // I think that this is the only null pointer here (4)
    }

    My thought is that dereferencing of the null pointer takes place only in the last case. Am I right? Is there a difference between compile-time and runtime null pointers according to the C++ Standard?

    Read the article

  • avoiding code duplication in Rails 3 models

    - by Dustin Frazier
    I'm working on a Rails 3.1 application where there are a number of different enum-like models that are stored in the database. There is a lot of identical code in these models, as well as in the associated controllers and views. I've solved the code duplication for the controllers and views via a shared parent controller class and the new view/layout inheritance that's part of Rails 3. Now I'm trying to solve the code duplication in the models, and I'm stuck. An example of one of my enum models is as follows:

    class Format < ActiveRecord::Base
      has_and_belongs_to_many :videos

      attr_accessible :name
      validates :name, presence: true, length: { maximum: 20 }

      before_destroy :verify_no_linked_videos

      def verify_no_linked_videos
        unless self.videos.empty?
          self.errors[:base] << "Couldn't delete format with associated videos."
          raise ActiveRecord::RecordInvalid.new self
        end
      end
    end

    I have four or five other classes with nearly identical code (the association declaration being the only difference). I've tried creating a module with the shared code that they all include (which seems like the Ruby Way), but much of the duplicate code relies on ActiveRecord, so the methods I'm trying to use in the module (validate, attr_accessible, etc.) aren't available. I know about ActiveModel, but that doesn't get me all the way there. I've also tried creating a common, non-persistent parent class that subclasses ActiveRecord::Base, but all of the code I've seen to accomplish this assumes that you won't have subclasses of your non-persistent class that do persist. Any suggestions for how best to avoid duplicating these identical lines of code across many different enum models?
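
    One Rails 3 idiom for this situation is an ActiveSupport::Concern module: the included do block runs in the class context, so validates, attr_accessible and before_destroy all work there, while the association (the only per-model difference) stays in each enum model. A hedged sketch, assuming the enums all point at :videos as in the Format example; if they don't, the concern would need the association name passed in:

    ```ruby
    # e.g. lib/enum_like.rb (make sure it is on the autoload path)
    module EnumLike
      extend ActiveSupport::Concern

      included do
        attr_accessible :name
        validates :name, presence: true, length: { maximum: 20 }
        before_destroy :verify_no_linked_videos
      end

      def verify_no_linked_videos
        unless videos.empty?
          errors[:base] << "Couldn't delete #{self.class.name.downcase} with associated videos."
          raise ActiveRecord::RecordInvalid.new(self)
        end
      end
    end

    class Format < ActiveRecord::Base
      include EnumLike
      has_and_belongs_to_many :videos   # the only per-model difference stays here
    end
    ```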

    Read the article

  • Call .NET Webservice with Android

    - by Lasse P
    Hi, I know this question has been asked here before, but I don't think those answers are adequate for my needs. We have a SOAP web service that is used for an iPhone application, but it is possible that we'll need an Android-specific version or a proxy for the service, so we have the option to go with either SOAP or JSON. I have a few concerns about both methods. SOAP solution: Is it possible to generate Java source code from a WSDL file, and if so, will it include some kind of proxy class to invoke the web service, and will it work in the Android environment at all? Google has not provided any SOAP library in Android, so I need to use a 3rd-party one; any suggestions? What about the performance/overhead of parsing and transmitting SOAP XML over the wire versus the JSON solution? JSON solution: There are a few classes in the Android SDK that will let me parse JSON, but do they support generic parsing, as in parsing the result into a complex type? Or would I need to implement that myself? I have read about two libraries here on Stack Overflow before, GSON and Jackson. What is the difference performance- and usability-wise (from a developer's perspective)? Do you have any experience with either of those libraries? So I guess the big question is: which method should I go with? I hope you can help me out. Thanks in advance :-)
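
    On the JSON side, the org.json classes ship inside the Android SDK, so simple payloads need no extra library; mapping onto complex types is manual, which is where GSON or Jackson help. A hedged sketch assuming a response shaped like {"products":[{"id":1,"name":"..."}]} (the payload shape and class names are invented for illustration):

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import org.json.JSONArray;
    import org.json.JSONException;
    import org.json.JSONObject;

    public class ProductParser {
        public static class Product {
            public long id;
            public String name;
        }

        // Hand-rolled mapping with the org.json classes bundled in Android;
        // GSON/Jackson would instead bind the same payload straight onto Product.
        public static List<Product> parse(String json) throws JSONException {
            JSONArray items = new JSONObject(json).getJSONArray("products");
            List<Product> result = new ArrayList<Product>();
            for (int i = 0; i < items.length(); i++) {
                JSONObject item = items.getJSONObject(i);
                Product p = new Product();
                p.id = item.getLong("id");
                p.name = item.getString("name");
                result.add(p);
            }
            return result;
        }
    }
    ```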

    Read the article

  • Why would the same web form control render different "onclick" logic based on the page structure?

    - by zk812
    It was hard to title this question. :) I have a user control that is used to display a list of lookup items in my database and a delete button for each item (the Editor User Control below). The delete button has an onClientClick event used to display a confirmation dialog in JavaScript. On page one, the confirmation pops up and functions correctly. The overall structure is:

    Master Page
      Page
        Editor User Control
          List of items with delete button

    On page two, the confirmation pops up but, regardless of the answer, the page posts back anyway. The structure of this page is:

    Master Page
      Page
        User Control
          Editor User Control
            List of items with delete button

    For some reason, this makes a difference in how the delete button is rendered.

    Page one:

    <input type="image" name="ctl00...RequestTypesDataList$ctl01$ctl01" src="Images/Disable.png" alt="Delete" onclick="return ProcessDeleteCommand(1);" />

    Page two:

    <input type="image" name="ctl00...RequestTypesDataList$ctl07$ctl01" src="Images/Disable.png" alt="Delete" onclick="return ProcessDeleteCommand(2);WebForm_DoPostBackWithOptions(new WebForm_PostBackOptions(&quot;ctl00$ContentPlaceHolder1$RequestCreator1$RequestTypeEditor1$RequestTypesDataList$ctl07$ctl01&quot;, &quot;&quot;, true, &quot;&quot;, &quot;&quot;, false, false))" />

    Does anyone know why page two renders WebForm_DoPostBackWithOptions after my JS check? It's causing the postback regardless of the confirmation choice.

    Read the article

  • iBatis not populating object when no rows are found.

    - by Omnipresent
    I am running a stored procedure that returns 2 cursors, and neither of them has any data. I have the following mapping XML:

    <resultMap id="resultMap1" class="HashMap">
      <result property="firstName" columnIndex="2"/>
    </resultMap>
    <resultMap id="resultMap2" class="com.somePackage.MyBean">
      <result property="unitStreetName" column="street_name"/>
    </resultMap>
    <parameterMap id="parmmap" class="map">
      <parameter property="id" jdbcType="String" javaType="java.lang.String" mode="IN"/>
      <parameter property="Result0" jdbcType="ORACLECURSOR" javaType="java.sql.ResultSet" mode="OUT" resultMap="resultMap1"/>
      <parameter property="Result1" jdbcType="ORACLECURSOR" javaType="java.sql.ResultSet" mode="OUT" resultMap="resultMap2"/>
    </parameterMap>
    <procedure id="proc" parameterMap="parmmap">
      { call my_sp (?,?,?) }
    </procedure>

    The first result set is being put into a HashMap; the second result set is being put into the MyBean class. The code in my DAO follows:

    HashMap map = new HashMap();
    map.put("id", "1234");
    getSqlMapClientTemplate().queryForList("mymap.proc", map);
    HashMap result1 = (HashMap)((List)parmMap.get("Result0")).get(0);
    MyBean myObject = (MyBean)((List)parmMap.get("Result1")).get(0);  // code fails here

    My code fails in the last line above. It fails because the second cursor has no rows, and that is why nothing is put into the list. However, the first cursor returns nothing as well, but since its results are being put into a HashMap, the list for the first cursor at least has a HashMap object inside it. Why this difference? Is there a way to make iBatis put an instance of MyBean inside the list even if there are no rows returned? Or should I be handling this in my DAO? I want to avoid handling it in the DAO because I have a whole bunch of DAOs like these.
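
    Whatever iBatis ends up doing with the empty cursor, the immediate failure on .get(0) can be avoided by guarding the OUT lists before dereferencing them. A small hedged helper (names are illustrative), which would be called as, e.g., MyBean myObject = ResultSets.firstOrNull(parmMap, "Result1"), leaving null to signal an empty cursor:

    ```java
    import java.util.List;
    import java.util.Map;

    public final class ResultSets {
        private ResultSets() {}

        // Returns the first element of an OUT-cursor list, or null when the
        // cursor produced no rows (so the DAO does not blow up on get(0)).
        public static <T> T firstOrNull(Map<String, Object> outParams, String key) {
            @SuppressWarnings("unchecked")
            List<T> rows = (List<T>) outParams.get(key);
            return (rows == null || rows.isEmpty()) ? null : rows.get(0);
        }
    }
    ```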

    Read the article

  • C#: performance of methods for receiving data from a socket?

    - by Daniel
    Let's assume we have a simple internet socket, and it's going to send 10 megabytes (because I want to ignore memory issues) of random data through. Is there any performance difference, or a best-practice method, that one should use for receiving the data? The final output should be represented as a byte[]. Yes, I know writing an arbitrary amount of data to memory is bad, and if I were downloading a large file I wouldn't be doing it like this. But for argument's sake let's ignore that and assume it's a smallish amount of data. I also realise that the bottleneck here is probably not the memory management but rather the socket receiving. I just want to know what would be the most efficient method of receiving the data. A couple of rough ways I can think of are: have a List<byte> and a buffer, and after the buffer is full, add it to the list and at the end call list.ToArray() to get the byte[]; or write the buffer to a MemoryStream, and after it's complete construct a byte[] of stream.Length and read it all into it to get the byte[] output. Is there a more efficient/better way of doing this?
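
    Of the two, the MemoryStream variant is usually the simpler one and avoids the per-byte overhead of List<byte>. A hedged sketch assuming an already-connected Socket that is closed by the sender when it is done:

    ```csharp
    using System.IO;
    using System.Net.Sockets;

    static class SocketReader
    {
        // Reads until the peer closes the connection; fine for "smallish"
        // payloads where buffering everything in memory is acceptable.
        public static byte[] ReceiveAll(Socket socket)
        {
            var buffer = new byte[8192];
            using (var ms = new MemoryStream())
            {
                int read;
                while ((read = socket.Receive(buffer)) > 0)
                {
                    ms.Write(buffer, 0, read);   // append only the bytes actually received
                }
                return ms.ToArray();             // one final copy out to the byte[]
            }
        }
    }
    ```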

    Read the article

  • Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?

    - by Jason R. Mick
    I searched Stack Overflow for the pros/cons of function-like macros vs. inline functions. I found the following discussion: Pros and Cons of Different macro function / inline methods in C ... but it didn't answer my primary burning question. Namely, what is the overhead in C of using a macro function (with variables, possibly other function calls) vs. an inline function, in terms of memory usage and execution speed? Are there any compiler-dependent differences in overhead? I have both icc and gcc at my disposal. The code snippet I'm modularizing is:

    double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
    double RepulsiveTerm = AttractiveTerm * AttractiveTerm;
    EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    My reason for turning it into an inline function/macro is so I can drop it into a C file and then conditionally compile other similar, but slightly different, functions/macros, e.g.:

    double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared,3);
    double RepulsiveTerm = pow(SigmaSquared/RadialDistanceSquared,9);
    EnergyContribution += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    (Note the difference in the second line.) This function is a central one in my code and gets called thousands of times per step in my program, and my program performs millions of steps. Thus I want the LEAST overhead possible, hence why I'm spending time worrying about the overhead of inlining vs. transforming the code into a macro. Based on the prior discussion I already realize the other pros/cons of macros (type independence and the errors that result from it)... but what I want to know most, and don't currently know, is the PERFORMANCE. I know some of you C veterans will have great insight for me!!
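
    For comparison, here are both forms side by side; with optimisation enabled (gcc or icc at -O2 and above) the static inline version normally compiles to the same code as the macro, but without the multiple-evaluation hazard and with type checking. The names mirror the snippet above and the numbers are illustrative:

    ```c
    #include <math.h>
    #include <stdio.h>

    /* Macro version: pure textual substitution, arguments may be evaluated
       more than once (pow(x,6) here is the same value as pow(x,3) squared). */
    #define LJ_CONTRIB(eps, sig2, r2) \
        (4.0 * (eps) * (pow((sig2) / (r2), 6) - pow((sig2) / (r2), 3)))

    /* Inline version: same math, but type-checked and single evaluation. */
    static inline double lj_contrib(double epsilon, double sigma2, double r2)
    {
        double attractive = pow(sigma2 / r2, 3);
        double repulsive  = attractive * attractive;
        return 4.0 * epsilon * (repulsive - attractive);
    }

    int main(void)
    {
        double e1 = LJ_CONTRIB(1.0, 1.2, 2.5);
        double e2 = lj_contrib(1.0, 1.2, 2.5);
        printf("%f %f\n", e1, e2);   /* both should print the same value */
        return 0;
    }
    ```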

    Read the article

  • OpenGL fast texture drawing with vertex buffer objects. Is this the way to do it?

    - by Matthew Mitchell
    Hello. I am making a 2D game with OpenGL. I would like to speed up my texture drawing by using VBOs; currently I am using immediate mode. I am generating my own coordinates when I rotate and scale a texture. I also have the ability to round the corners of a texture, using the polygon primitive to draw those. I was thinking: would it be fastest to make a VBO with vertices for the sides of the texture, with no offset included, so I can then use glViewport, glScale (or glTranslate? What is the difference, and which is most suitable here?) and glRotate to move the drawing position for my texture? Then I could use the same VBO, unchanged, to draw the texture each time. I would only change the VBO when I need to add coordinates for the rounded corners. Is that the best way to do this? What things should I look out for while doing it? Is it really faster to use GL_TRIANGLES instead of GL_QUADS on modern graphics cards? Thank you for any answer.
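
    A hedged sketch of the setup described above: one static VBO holding a unit quad as two GL_TRIANGLES (interleaved position and texture coordinates), positioned per sprite with glTranslate/glRotate/glScale so the buffer itself never changes. Function names and the extension loader are illustrative:

    ```c
    #include <GL/glew.h>   /* or whichever GL/extension header the project already uses */

    /* x, y, s, t for a unit quad, expressed as two triangles. */
    static const GLfloat quad[] = {
        0.0f, 0.0f, 0.0f, 0.0f,
        1.0f, 0.0f, 1.0f, 0.0f,
        1.0f, 1.0f, 1.0f, 1.0f,
        0.0f, 0.0f, 0.0f, 0.0f,
        1.0f, 1.0f, 1.0f, 1.0f,
        0.0f, 1.0f, 0.0f, 1.0f,
    };

    static GLuint quadVbo;

    void createQuadVbo(void)
    {
        glGenBuffers(1, &quadVbo);
        glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
    }

    void drawSprite(GLuint texture, float x, float y, float w, float h, float angleDeg)
    {
        glBindTexture(GL_TEXTURE_2D, texture);
        glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (void*)0);
        glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));

        glPushMatrix();                      /* position the same VBO per sprite: */
        glTranslatef(x, y, 0.0f);            /* glTranslate moves, glScale resizes */
        glRotatef(angleDeg, 0.0f, 0.0f, 1.0f);
        glScalef(w, h, 1.0f);
        glDrawArrays(GL_TRIANGLES, 0, 6);
        glPopMatrix();

        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    }
    ```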

    Read the article

  • How to programmatically compare permissions of a login/user in SQL Server 2005

    - by titanium
    There's a login/user in SQL Server who is having a problem importing accounts on the production server. I don't know what method he is using to do this. According to the person doing the import, it works fine on the development server, but when he did the same import in production it gave him errors. Below are the errors he is getting for each account:

    2009-06-05 18:01:05.8254 ERROR [engine-1038] Task [1038:00001 - Members]: Step 1.0 [<Insert step description>]: Task.RunStep(): StoreRow has failed
    2009-06-05 18:01:05.9035 ERROR [engine-1038] Task [1038:00001 - Members]: Step 1.0 [<Insert step description>]: Task.RunStep(): StoreRow exception: Exception caught while storing Data. [Microsoft][ODBC SQL Server Driver][SQL Server]'ACCOUNT1' is not a valid login or you do not have permission.

    Please note that 'ACCOUNT1' is not the real account name; I changed it for security reasons. Using SQL Server Management Studio (SSMS), I viewed the permissions of the user/login who is performing the import on the development server and on production for comparison, and found no difference. My question is: is there a way to programmatically query the server-level and database-level permissions of a particular login/user, so that I can compare them and spot any differences?
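
    Yes; SQL Server 2005 exposes this through catalog views and fn_my_permissions, so a script like the hedged sketch below can be run against both servers and the output diffed (substitute the real login/user name for the placeholder):

    ```sql
    -- Server-level permissions granted to the login
    SELECT pr.name AS principal, pe.permission_name, pe.state_desc
    FROM sys.server_principals pr
    JOIN sys.server_permissions pe ON pe.grantee_principal_id = pr.principal_id
    WHERE pr.name = 'ACCOUNT1';

    -- Database-level permissions granted to the mapped user (run in each database)
    SELECT dp.name AS principal, pe.permission_name, pe.state_desc,
           pe.class_desc, OBJECT_NAME(pe.major_id) AS object_name
    FROM sys.database_principals dp
    JOIN sys.database_permissions pe ON pe.grantee_principal_id = dp.principal_id
    WHERE dp.name = 'ACCOUNT1';

    -- Effective permissions for the current execution context
    SELECT * FROM fn_my_permissions(NULL, 'SERVER');
    SELECT * FROM fn_my_permissions(NULL, 'DATABASE');
    ```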

    Read the article

  • JSR-299 (CDI) configuration at runtime

    - by nsn
    I need to configure different @Alternatives, @Decorators and @Injectors for different runtime environments (think testing, staging and production servers). Right now I use maven to create three wars, and the only difference between those wars are in the beans.xml files. Is there a better way to do this? I do have @Alternative @Stereotypes for the different environments, but even then I need to alter beans.xml, and they don't work for @Decorators (or do they?) Is it somehow possible to instruct CDI to ignore the values in beans.xml and use a custom configuration source? Because then I could for example read a system property or other environment variable. The application exclusively runs in containers that use Weld, so a weld-specific solution would be ok. I already tried to google this but can't seem to find good search terms, and I asked the Weld-Users-Forums, but to no avail. Someone over there suggested to write my own custom extension, but I can't find any API to actually change the container configuration at runtime. I think it would be possible to have some sort of @ApplicationScoped configuration bean and inject that into all @Decorators which could then decide themselves whether they should be active or not and then in order to configure @Alternatives write @Produces methods for every interface with multiple implementations and inject the config bean there too. But this seems to me like a lot of unnecessary work to essentially duplicate functionality already present in CDI?

    Read the article

  • Why does PEAR get installed to my user directory?

    - by webworm
    I am new to Linux and I am attempting to install the PHP PEAR library on a virtual server which is running Ubuntu. I am following a tutorial that covers installing PEAR but have run up against an area where I am confused. When running the PEAR installation program I am prompted as to what I want the INSTALL_PREFIX to be. Evidently the INSTALL_PREFIX, among other things, determines where PEAR will be installed. The tutorial suggest the value of INSTALL_PREFIX be the following path ... "/home/MY_USER_NAME/pear" where MY_USER_NAME = my user account Having come from a Windows world, applications are installed on the system where everyone can use them. If I install PEAR underneath my user directory will other developers on the system be able to make use of PEAR in their PHP scripts? I want to make PEAR available to all users and not just myself. Could someone explain to me the difference between installing for all users and installing just for myself? Does the install location matter? Should I be installing PEAR in a different location? Thanks for any suggestions. P.S. The tutorial I am following is located at the following URL ... http://articles.sitepoint.com/article/getting-started-with-pear/2

    Read the article

  • IE8: weird border around HTML button element

    - by s427
    I have a button element with a custom background (image + color) and no borders except for a 2px border-bottom (and a bunch of other properties; code below), which renders quite differently in Firefox and in IE8. The problem is, this is work for a company that uses IE8 as their only browser, so it's important that the button renders well in IE8. Here's a visual comparison between the two. My question here is not about the padding difference (I'm looking into that), but about the weird border that is visible in IE8 in addition to the regular border (border-bottom). Can anyone explain to me where it comes from and how to get rid of it? Thanks in advance. Here is the HTML code:

    <button class="btn" id="c_edit">
      <span>Annuler</span>
    </button>

    And here is the CSS:

    .btn {
      display: inline-block;
      margin: 0 0 7px 5px;
      padding: 0;
      color: #ddd;
      font-size: 14px;
      font-family: FrutigerLTStd55Roman, sans-serif;
      text-decoration: none;
      border: none;
      border-bottom: 2px solid #222;
      background-color: #999;
      background-image: url('img/btn_bg.gif');
      background-position: 0 bottom;
      background-repeat: repeat-x;
      cursor: pointer;
      transition: all .5s ease-out;
    }
    .btn span {
      display: inline-block;
      margin: 0;
      padding: 8px 10px 6px 40px;
      background-color: transparent;
      background-position: 4px 0;
      background-repeat: no-repeat;
    }

    Read the article

  • HMAC URLs instead of login?

    - by Tres
    In implementing my site (a Rails site if it makes any difference), one of my design priorities is to relieve the user of the need to create yet another username and password while still providing useful per-user functionality. The way I am planning to do this is: User enters information on the site. Information is associated with the user via server-side session. User completes entering information, server sends an access URL via e-mail to the user roughly in the form of: http://siteurl/<user identifier>/<signature: HMAC(secret + salt + user identifier)> User clicks URL, site looks up user ID and salt and computes the HMAC with the server-stored secret and authenticates if the computed HMAC and signature match. My question is: is this a reasonably secure way to accomplish what I'm looking to do? Are there common attacks that would render it useless? Is there a compelling reason to abandon my desire to avoid a username/password? Is there a must-read book or article on the subject? Note that I'm not dealing with credit card numbers or anything exceedingly private, but I would still like to keep the information reasonably secure.
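
    The scheme described is essentially a signed, non-expiring login token, so the two extras most worth considering are an expiry timestamp inside the signed data and a constant-time comparison when verifying. A hedged Ruby sketch of both ends, reading the scheme as an HMAC keyed with the secret over salt + user id (method names are illustrative, not an existing API):

    ```ruby
    require 'openssl'

    SECRET = 'server-side secret, never sent to the client'

    def access_token(user_id, salt)
      OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha256'), SECRET, "#{salt}#{user_id}")
    end

    def access_url(user_id, salt)
      "http://siteurl/#{user_id}/#{access_token(user_id, salt)}"
    end

    # Constant-time comparison so the check does not leak how many leading
    # characters of the supplied signature were correct.
    def secure_equals?(a, b)
      return false unless a.bytesize == b.bytesize
      a.unpack('C*').zip(b.unpack('C*')).inject(0) { |acc, (x, y)| acc | (x ^ y) }.zero?
    end

    def authentic?(user_id, salt, signature)
      secure_equals?(access_token(user_id, salt), signature)
    end

    puts access_url(42, 'per-user-salt')
    puts authentic?(42, 'per-user-salt', access_token(42, 'per-user-salt'))   # => true
    ```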

    Read the article

  • Update Web Reference in Visual Studio

    - by NeilD
    Hi, I have inherited a web site project that makes use of a number of WCF Web Services hosted on a BizTalk server. We have two environments that I need to deploy this project to, with different URLs for the different BizTalk servers. i.e. In the Staging environment, I need to point the services at xx.xx.xx.101 In the Live environment, I need to point them at xx.xx.xx.102, or whatever. Currently, we've got all of the URLs stored in keys in the web.config file, so that we can change them dynamically... Unfortunately this isn't working! If I change the URL in the web.config to something other than what the project was compiled with, I get an error when calling the service: Server did not recognize the value of HTTP Header SOAPAction: xx.xx.xx.101\ServiceName\MethodName I'm told that the only way they've known to deploy this is to update the web.config URLs, change all of the web references in Visual Studio to match, click on "update web reference" for each reference in Visual Studio, and then compile. It's driving me mad! I've written a pre-build NAnt script to go through and replace all instances of the URL found anywhere in the project directory, and even that isn't making any difference. There must be something else being pulled down from the service when I click the "update reference", but I'm new to working with web services, and so I'm not sure what. Does anyone have any ideas? Is there a way to do this programatically? Thanks.

    Read the article

  • OpenGL Mipmapping: how does OpenGL decide on map level?

    - by Droozle
    Hi, I am having trouble implementing mipmapping in OpenGL. I am using OpenFrameworks and have modified the ofTexture class to support the creation and rendering of mipmaps. The following code is the original texture creation code from the class (slightly modified for clarity):

    glEnable(texData.textureTarget);
    glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
    glTexSubImage2D(texData.textureTarget, 0, 0, 0, w, h, texData.glType, texData.pixelType, data);
    glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glDisable(texData.textureTarget);

    This is my version with mipmap support:

    glEnable(texData.textureTarget);
    glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
    gluBuild2DMipmaps(texData.textureTarget, texData.glTypeInternal, w, h, texData.glType, texData.pixelType, data);
    glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glDisable(texData.textureTarget);

    The code does not generate errors (gluBuild2DMipmaps returns 0) and the textures are rendered without problems. However, I do not see any difference. The scene I render consists of "flat, square tiles" at z=0; it's basically a 2D scene. I zoom in and out by using glScale() before drawing the tiles. When I zoom out, the pixels of the tile textures start to "dance", indicating (as far as I can tell) unfiltered texture look-up. See: http://www.youtube.com/watch?v=b_As2Np3m8A at 25s. My question is: since I do not move the camera position, but only use scaling of the whole scene, does this mean OpenGL cannot decide on the appropriate mipmap level and uses the full texture size (level 0)? Paul

    Read the article

  • non-cached RSLs in Flex?

    - by SteMa
    I have a project that is built for several customers; the only difference is in the DB, and everything else looks the same except for the main page's text, which is loaded from an external swf file. I created a library, compiled it as an swc, imported it and am using it as an RSL. The problem is that once I've opened the page and afterwards update the RSL (because changes to the text are needed), it's already cached by the browser (the browser's cache, not the Flash Player's, and please let's not discuss that!), and the updated swf won't be loaded. If I use it as an external library, the page won't even start up (the browser says it's loaded, but it's blank; not even the Flex loading progress bar appears).

    <local:MainPage includeIn="default" currentState="{MainPageState}" id="Page" width="100%" height="100%" />

    This is the code on the main page; if I comment it out, then the whole thing loads, even using the "external" link-type. If it helps, in the design view I see the component, but I get a warning for the library: Design mode could not load MainPage.swc. It may be incompatible with this SDK, or invalid. (DesignAssetLoader.CompleteTimeout)

    Read the article
