Search Results

Search found 67262 results on 2691 pages for 'data driven testing'.

Page 216/2691 | < Previous Page | 212 213 214 215 216 217 218 219 220 221 222 223  | Next Page >

  • N-tier architecture and unit tests (using Java)

    - by Alexandre FILLATRE
    Hi there, I'd like your expert explanations about an architectural question. Imagine a Spring MVC webapp with the validation API (JSR 303). For a request, I have a controller that handles the request, then passes it to the service layer, which passes it to the DAO. Here's my question: at which layer should the validation occur, and how? My thought is that the controller has to handle basic validation (are mandatory fields empty? Is the field length OK? etc.). Then the service layer can do some trickier stuff that involves other objects. The DAO does no validation at all. BUT, if I want to implement some unit testing (i.e. test layers below the service, not the controllers), I'll end up with unexpected behavior because some validations should have been done in the controller layer. As we don't use the controller for unit testing, there is a problem. What is the best way to deal with this? I know there is no universal answer, but your personal experience is very welcome. Thanks a lot. Regards.
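
    One hedged approach is to keep the JSR-303 constraints on the objects themselves and exercise them in service-layer tests with a standalone Validator, so the checks don't depend on the Spring MVC binding at all. A minimal sketch (the DTO and its constraints are hypothetical):

        import java.util.Set;
        import javax.validation.ConstraintViolation;
        import javax.validation.Validation;
        import javax.validation.Validator;
        import javax.validation.constraints.NotNull;
        import javax.validation.constraints.Size;
        import org.junit.Assert;
        import org.junit.Test;

        public class RegistrationValidationTest {

            // Hypothetical DTO carrying the same JSR-303 constraints the controller relies on
            static class RegistrationDto {
                @NotNull @Size(min = 1, max = 50)
                String username;
            }

            // Standalone validator: no Spring MVC binding or controller involved
            private final Validator validator =
                    Validation.buildDefaultValidatorFactory().getValidator();

            @Test
            public void rejectsMissingMandatoryFields() {
                Set<ConstraintViolation<RegistrationDto>> violations =
                        validator.validate(new RegistrationDto());
                Assert.assertFalse("a missing username should be reported", violations.isEmpty());
            }
        }

    With the constraints living on the model, the controller, the service layer and the tests all trigger the same rules, so nothing is lost by bypassing the controller in unit tests.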

    Read the article

  • Django formset unit test

    - by Py
    I can't get a unit test with a formset to run. Here is the test I tried: class NewClientTestCase(TestCase): def setUp(self): self.c = Client() def test_0_create_individual_with_same_adress(self): post_data = { 'ctype': User.CONTACT_INDIVIDUAL, 'username': 'dupond.f', 'email': '[email protected]', 'password': 'pwd', 'password2': 'pwd', 'civility': User.CIVILITY_MISTER, 'first_name': 'François', 'last_name': 'DUPOND', 'phone': '+33 1 34 12 52 30', 'gsm': '+33 6 34 12 52 30', 'fax': '+33 1 34 12 52 30', 'form-0-address1': '33 avenue Gambetta', 'form-0-address2': 'apt 50', 'form-0-zip_code': '75020', 'form-0-city': 'Paris', 'form-0-country': 'FRA', 'same_for_billing': True, } response = self.c.post(reverse('client:full_account'), post_data, follow=True) self.assertRedirects(response, '%s?created=1' % reverse('client:dashboard')) and I have this error: ValidationError: [u'ManagementForm data is missing or has been tampered with'] My view: def full_account(request, url_redirect=''): from forms import NewUserFullForm, AddressForm, BaseArticleFormSet fields_required = [] fields_notrequired = [] AddressFormSet = formset_factory(AddressForm, extra=2, formset=BaseArticleFormSet) if request.method == 'POST': form = NewUserFullForm(request.POST) objforms = AddressFormSet(request.POST) if objforms.is_valid() and form.is_valid(): user = form.save() address = objforms.forms[0].save() if url_redirect=='': url_redirect = '%s?created=1' % reverse('client:dashboard') logon(request, form.instance) return HttpResponseRedirect(url_redirect) else: form = NewUserFullForm() objforms = AddressFormSet() return direct_to_template(request, 'clients/full_account.html', { 'form':form, 'formset': objforms, 'tld_fr':False, }) and my form file: class BaseArticleFormSet(BaseFormSet): def clean(self): msg_err = _('Ce champ est obligatoire.') non_errors = True if 'same_for_billing' in self.data and self.data['same_for_billing'] == 'on': same_for_billing = True else: same_for_billing = False for i in [0, 1]: form = self.forms[i] for field in form.fields: name_field = 'form-%d-%s' % (i, field ) value_field = self.data[name_field].strip() if i == 0 and self.forms[0].fields[field].required and value_field =='': form.errors[field] = msg_err non_errors = False elif i == 1 and not same_for_billing and self.forms[1].fields[field].required and value_field =='': form.errors[field] = msg_err non_errors = False return non_errors class AddressForm(forms.ModelForm): class Meta: model = Address address1 = forms.CharField() address2 = forms.CharField(required=False) zip_code = forms.CharField() city = forms.CharField() country = forms.ChoiceField(choices=CountryField.COUNTRIES, initial='FRA')
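
    That ValidationError almost always means the formset's ManagementForm fields are missing from the POST body. A hedged fix is to add them to post_data before posting (the names below assume the default "form" prefix; 'form-MAX_NUM_FORMS' may not be required on older Django releases):

        # Management-form fields the AddressFormSet expects in the POST
        post_data.update({
            'form-TOTAL_FORMS': '2',    # matches extra=2 in formset_factory
            'form-INITIAL_FORMS': '0',
            'form-MAX_NUM_FORMS': '',
        })
        response = self.c.post(reverse('client:full_account'), post_data, follow=True)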

    Read the article

  • Multiple calls to data service from SL3?

    - by Chris
    I have an SL3 app that makes asynchronous calls to a data service. Basically, there is a treeview that is bound to a collection of objects. The idea is that as a user selects a specific treeviewitem, a call is made to the data service, with a parameter specific to the selected treeviewitem being passed to the corresponding web method in the data service. The data service returns data back to the SL3 client, and the client presents the data to the user. This works well. The problem is that when users start to navigate through the treeview using the arrow keys on their keyboard, they could press the down arrow key, for example, 10 times, and 10 calls will be made to the data service, and then each of the 10 items will be displayed to the user momentarily, until finishing with the data for the most recently selected treeview item. So - onto the question. How can I put in some form of delay, so that someone can navigate quickly through the treeview and, once they stop at a certain treeviewitem, a call is made to the data service? Thanks for any suggestions. Chris
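
    One hedged way to get that delay is a DispatcherTimer that restarts on every selection change, so only the item the user finally rests on triggers a service call. A minimal sketch (the page name, handler wiring and LoadDetailsAsync call are hypothetical):

        using System;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Threading;

        public partial class CatalogPage : UserControl   // hypothetical page hosting the TreeView
        {
            // Only the item the user rests on for ~400 ms triggers a service call
            private readonly DispatcherTimer _selectionTimer =
                new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(400) };
            private object _pendingItem;

            public CatalogPage()
            {
                InitializeComponent();
                _selectionTimer.Tick += (s, e) =>
                {
                    _selectionTimer.Stop();
                    LoadDetailsAsync(_pendingItem);   // hypothetical async call into the data-service proxy
                };
            }

            private void Tree_SelectedItemChanged(object sender, RoutedPropertyChangedEventArgs<object> e)
            {
                _pendingItem = e.NewValue;
                _selectionTimer.Stop();   // restart the countdown on every arrow-key move
                _selectionTimer.Start();
            }

            private void LoadDetailsAsync(object item) { /* existing async service call */ }
        }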

    Read the article

  • Best way to test a Delphi application

    - by Osama ALASSIRY
    I have a Delphi application that has many dependencies, and it would be difficult to refactor it to use DUnit (it's huge), so I was thinking about using something like AutomatedQA's TestComplete to do the testing from the front-end UI. My main problem is that a bugfix or new feature sometimes breaks old code that was previously tested (manually), and used to work. I have set up the application to use command-line switches to open up a specific form that could be tested, and I can create a set of values and clicks that need to be done. But I have a few questions before I do anything drastic... (and before purchasing anything) Is it worth it? Would this be a good way to test? The result of the test should be in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)? I would need to set up a test database to do all the automated testing; would there be an easy way to automate re-setting the test db, other than drop user cascade, create user, ..., impdp? Is there a way in TestComplete to specify command-line parameters for an exe? Does anybody have any similar experiences?

    Read the article

  • how often should the entire suite of a system's unit tests be run?

    - by gerryLowry
    Generally, I'm still very much a unit testing neophyte. BTW, you may also see this question on other forums like xUnit.net, et cetera, because it's an important question to me. I apologize in advance for my cross-posting; your opinions are very important to me and not everyone in this forum belongs to the other forums too. I was looking at a large decade-old legacy system which has had over 700 unit tests written recently (700 is just a small beginning). The tests happen to be written in MSTest, but this question applies to all testing frameworks AFAIK. When I ran "ALL TESTS" via VS2008, the final count was only seven tests. That's about 1% of the total tests that have been written to date. MORE INFORMATION: The ASP.NET MVC 2 RTM source code, including its unit tests, is available on CodePlex; those unit tests are also written in MSTest even though (an irrelevant fact) Brad Wilson later joined the ASP.NET MVC team as its Senior Programmer. All 2000-plus tests get run, not just a few. QUESTION: given that AFAIK the purpose of unit tests is to identify breakages in the SUT, am I correct in thinking that the "best practice" is to always, or at least very frequently, run all of the tests? Thank you. Regards, Gerry (Lowry)

    Read the article

  • Getting zeros between data while reading a binary file in C

    - by indiajoe
    I have binary data which I am reading into an array of long integers using a C program. A hexdump of the binary data shows that, after the first few data points, it starts again at hex address 20000. The hexdump output is shown below. 0000000 0000 0000 0000 0000 0000 0000 0000 0000 * 0020000 0000 0000 0053 0000 0064 0000 006b 0000 0020010 0066 0000 0068 0000 0066 0000 005d 0000 0020020 0087 0000 0059 0000 0062 0000 0066 0000 ........ and so on... But when I read it into an array 'data' of long integers with the typical fread command fread(data,sizeof(*data),filelength/sizeof(*data),fd); my data array is filled with zeros until it reaches the 20000 location. After that it reads the data correctly. Why is it reading regions where my file data is not there? Or how can I make it read only my file, and not anything in between that is not in the file? I know it looks like a trivial problem, but I cannot figure it out even after googling for one night. Can anyone suggest where I am going wrong? Other info: I am working on a GNU/Linux machine (the slax-atma distro, to be specific). My C compiler is gcc.
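
    A hedged reading of the hexdump: the "*" line means "the previous line repeats", so the zeros up to offset 0x20000 really are part of the file, and fread is reporting them faithfully rather than inventing them. If only the region from 0x20000 onward is interesting, one minimal sketch (file name assumed) is to seek past the zero-filled header before reading:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            FILE *fp = fopen("data.bin", "rb");    /* hypothetical file name */
            if (fp == NULL) { perror("fopen"); return EXIT_FAILURE; }

            /* skip the zero-filled header instead of reading through it */
            if (fseek(fp, 0x20000L, SEEK_SET) != 0) { perror("fseek"); return EXIT_FAILURE; }

            long data[1024];
            size_t n = fread(data, sizeof data[0], 1024, fp);
            printf("read %zu values, first = %ld\n", n, n ? data[0] : 0L);

            fclose(fp);
            return 0;
        }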

    Read the article

  • memcache is not storing data across requests

    - by morpheous
    I am new to using memcache, so I may be doing something wrong. I have written a wrapper class around memcache. The wrapper class has only static methods, so it is a quasi-singleton. The class looks something like this: class myCache { private static $memcache = null; private static $initialized = false; public static function init() { if (self::$initialized) return; self::$memcache = new Memcache(); if (self::configure()) //connects to daemon { self::store('foo', 'bar'); } else throw ConnectionError('I barfed'); } public static function store($key, $data, $flag=MEMCACHE_COMPRESSED, $timeout=86400) { if (self::$memcache->get($key)!== false) return self::$memcache->replace($key, $data, $flag, $timeout); return self::$memcache->set($key, $data, $flag, $timeout); } public static function fetch($key) { return self::$memcache->get($key); } } //in my index.php file, I use the class like this require_once('myCache.php'); myCache::init(); echo 'Stored value is: '. myCache::fetch('foo'); The problem is that the myCache::init() method is being executed in full every time a page is requested. I then remembered that static variables do not maintain state across page requests. So I decided instead to store the flag that indicates whether the server contains the start-up data (for our purposes, the variable 'foo', with value 'bar') in memcache itself. Once the status flag is stored in memcache itself, it solves the problem of the initialisation data being loaded for every page request (which, quite frankly, defeats the purpose of memcache). However, having solved that problem, when I come to fetch the data in memcache, it is empty. I don't understand what's going on. Can anyone clarify how I can store my data once and retrieve it across page requests? BTW, (just to clarify), the get/set is working correctly, and if I allow memcache to load the initialisation data for each page request (which is silly), then the data is available in memcache.
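
    PHP is shared-nothing, so every static property (and the Memcache connection object itself) is thrown away at the end of each request; only the values stored in the memcached daemon survive. That means init() always has to reconnect, but it does not have to re-seed the data. A minimal sketch of that shape, assuming a local daemon on the default port:

        <?php
        class myCache
        {
            private static $memcache = null;

            public static function init()
            {
                // A connection must be made on every request; pconnect reuses
                // the underlying socket where the SAPI allows it.
                self::$memcache = new Memcache();
                if (!self::$memcache->pconnect('127.0.0.1', 11211)) {
                    throw new RuntimeException('could not reach memcached');
                }

                // Re-seed only when the sentinel key has expired or been evicted
                if (self::$memcache->get('myCache::warm') === false) {
                    self::$memcache->set('foo', 'bar', MEMCACHE_COMPRESSED, 86400);
                    self::$memcache->set('myCache::warm', 1, 0, 86400);
                }
            }

            public static function fetch($key)
            {
                return self::$memcache->get($key);
            }
        }

    If fetch() still comes back empty across requests, it is worth checking that both requests really hit the same memcached instance and that the item's timeout has not elapsed.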

    Read the article

  • Cross-domain data access in JavaScript

    - by vit
    We have an ASP.Net application hosted on our network and exposed to a specific client. This client wants to be able to import data from their own server into our application. The data is retrieved with an HTTP request and is CSV formatted. The problem is that they do not want to expose their server to our network and are requesting the import to be done on the client side (all clients are from the same network as their server). So, what needs to be done is: They request an import page from our server The client script on the page issues a request to their server to get CSV formatted data The data is sent back to our application This is not a challenge when both servers are on the same domain: a simple hidden iframe or something similar will do the trick, but here what I'm getting is a cross-domain "access denied" error. They also refuse to change the data format to return JSON or XML formatted data. What I tried and learned so far is: Hidden iframe -- "access denied" XMLHttpRequest -- behaviour depends on the browser security settings: may work, may work while nagging a user with security warnings, or may not work at all Dynamic script tags -- would have worked if they could have returned data in JSON format IE client data binding -- the same "access denied" error Is there anything else I can try before giving up and saying that it will not be possible without exposing their server to our application, changing their data format or changing their browser security settings? (DNS trick is not an option, by the way).
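
    One more hedged option, on top of the list above: if the client is willing to add a single response header on their server (Access-Control-Allow-Origin naming the origin of the import page), browsers that support CORS will let a plain XMLHttpRequest fetch the CSV directly, with no change to the CSV format and without exposing their server to our network. Origins and helper names below are hypothetical:

        // Requires their server to reply with:
        //   Access-Control-Allow-Origin: http://our-app.example.com
        // (older IE would need XDomainRequest instead)
        var xhr = new XMLHttpRequest();
        xhr.open('GET', 'http://their-server.example.com/export.csv', true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                sendCsvToOurServer(xhr.responseText);   // hypothetical: post the CSV back to our app
            }
        };
        xhr.send();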

    Read the article

  • AJAX Post Not Sending Data?

    - by Jascha
    I can't for the life of me figure out why this is happening. This is kind of a repost, so forgive me, but I have new data. I am running a JavaScript log-out function called logOut() that makes a jQuery ajax call to a php script... function logOut(){ var data = new Object; data.log_out = true; $.ajax({ type: 'POST', url: 'http://www.mydomain.com/functions.php', data: data, success: function() { alert('done'); } }); } The PHP function it calls is here: if(isset($_POST['log_out'])){ $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('logOutSuccess')"; $connection->runQuery($query); // <-- my own database class... // omitted code that clears session etc... die(); } Now, 18 hours out of the day this works, but for some reason, every once in a while, the POST data will not trigger my query. (this will last about an hour or so). I figured out the post data is not being set by adding this at the end of my script... $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('POST FAIL')"; $connection->runQuery($query); So, now I know for certain my log out function is being skipped, because my database contains the following data: if it were NOT being skipped, my data would show up like this: I know it is being skipped for two reasons: one, the die() at the end of my first function, and two, if it were a success a "logOutSuccess" would be registered in the table. Any thoughts? One friend says it's a janky hosting company (hostgator.com). I personally like them because they are cheap and I'm a fan of cpanel. But, if that's the case??? Thanks in advance. -J

    Read the article

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple other tables that define what the question text is for the QuestionId and demographics for the RespondentId... Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export, its working... The problem is that it seems slow and we don't have that much data (less than 50k respondents). Right now I'm thinking "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?" The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level, find out what respondents are being exported and then select their combined cached de-aggregated data. To me the quick and dirty fix is a totally flat table, RespondentId, Question1, Question2, etc. The problem is, we have multiple clients and that doesn't scale AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data FOR XML AUTO WHERE RespondentId = x. With that in place, I would then be able to get my export with filtering and XML calls into the XML column. What are you doing to export aggregated data in a flattened format (CSV, Excel, etc)? Does this approach seem ok? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!

    Read the article

  • Prove correctness of unit test

    - by Timo Willemsen
    I'm creating a graph framework for learning purposes. I'm using a TDD approach, so I'm writing a lot of unit tests. However, I'm still figuring out how to prove the correctness of my unit tests. For example, I have this class (not including the implementation, and I have simplified it): public class SimpleGraph { //Returns true on success public boolean addEdge(Vertex v1, Vertex v2) { ... } //Returns true on success public boolean addVertex(Vertex v1) { ... } } I have also created this unit test: @Test public void SimpleGraph_addVertex_noSelfLoopsAllowed(){ SimpleGraph g = new SimpleGraph(); Vertex v1 = new Vertex("Vertex 1"); g.addVertex(v1); boolean expected = false; boolean actual = g.addEdge(v1, v1); Assert.assertEquals(expected, actual); } Okay, awesome, it works. There is only one crux here: I have proved that the functions work for this case only. However, in my graph theory courses, all I'm doing is proving theorems mathematically (induction, contradiction etc. etc.). So I was wondering, is there a way I can prove my unit tests correct mathematically? Is there a good practice for this? So we're testing the unit for correctness, instead of testing it for one certain outcome.

    Read the article

  • How to map a test onto a list of numbers

    - by Arthur Ulfeldt
    I have a function with a bug: user> (-> 42 int-to-bytes bytes-to-int) 42 user> (-> 128 int-to-bytes bytes-to-int) -128 user> Looks like I need to handle overflow when converting back... Better write a test to make sure this never happens again. This project is using clojure.contrib.test-is so I write: (deftest int-to-bytes-to-int (let [lots-of-big-numbers (big-test-numbers)] (map #(is (= (-> % int-to-bytes bytes-to-int) %)) lots-of-big-numbers))) This should test that converting to a seq of bytes and back again produces the original result for a list of 10000 random numbers. Looks OK in theory, except none of the tests ever run. Testing com.cryptovide.miscTest Ran 23 tests containing 34 assertions. 0 failures, 0 errors. Why don't the tests run? What can I do to make them run?
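
    The likely culprit is that map is lazy and nothing in the test body forces the resulting sequence, so the (is ...) assertions are never evaluated. A minimal hedged fix is to use an eager form such as doseq (or wrap the map in dorun):

        (deftest int-to-bytes-to-int
          ;; doseq is eager, so every assertion actually executes
          (doseq [n (big-test-numbers)]
            (is (= (-> n int-to-bytes bytes-to-int) n))))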

    Read the article

  • CodeIgniter and SimpleTest -- How to make my first test?

    - by Smandoli
    I'm used to web development using LAMP, PHP5, MySQL plus NetBeans with Xdebug. Now I want to improve my development, by learning how to use (A) proper testing and (B) a framework. So I have set up CodeIgniter, SimpleTest and the easy Xdebug add-in for Firefox. This is great fun because maroonbytes provided me with clear instructions and a configured setup ready for download. I am standing on the shoulders of giants, and very grateful. I've used SimpleTest a bit in the past. Here is a the kind of thing I wrote: <?php require_once('../simpletest/unit_tester.php'); require_once('../simpletest/reporter.php'); class TestOfMysqlTransaction extends UnitTestCase { function testDB_ViewTable() { $this->assertEqual(1,1); // a pseudo-test } } $test = new TestOfMysqlTransaction(); $test->run(new HtmlReporter()) ?> So I hope I know what a test looks like. What I can't figure out is where and how to put a test in my new setup. I don't see any sample tests in the maroonbytes package, and Google so far has led me to posts that assume unit testing is already functionally available. What do I do?

    Read the article

  • Best Practice: Protecting Personally Identifiable Data in a ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, we only need the restricted data a couple of times a year, and then only two employees need it. I've read up on SQL Server 2008's encryption function, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key. Thus it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that, but permissions should also have prevented the leak in the first place. It seems to me the better method would be to asymmetrically encrypt the data in the web application. Then store the private key offline and have a fat client that they can run the few times a year they need to access the restricted data, so the data could be decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what they do we may leak future data. I think the big disadvantage is this would require re-writing the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation. Do you have a better suggestion? Which method would you recommend? More importantly, why?

    Read the article

  • Load some data from database and hide it somewhere in a web page

    - by kwokwai
    Hi all, I am trying to load some data (which may be up to a few thousand words) from the database, and store the data somewhere in an HTML web page for comparison with the data input by users. I am thinking of loading the data into a textarea inside a div tag and hiding the data: <Div id="reference" style="Display:none;"> <textarea rows="2" cols="20" id="database"> html, htm, php, asp, jsp, aspx, ctp, thtml, xml, xsl... </textarea> </Div> <table border=0 width="100%"> <tr> <td>Username</td> <td> <div id="username"> <input type="text" name="data" id="data"> </div> </td> </tr> </table> <script> $(document).ready(function(){ //comparing the data loaded from database with the user's input if($("#data").val()==$("#database").val()) {alert("error");} }); </script> I am not sure if this is the best way to do it, so could you please give me some advice and suggest your methods?
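
    Two hedged notes on the sketch above: the comparison inside $(document).ready runs only once, at page load, before the user has typed anything; and comparing whole .val() strings only matches when the input equals the entire hidden list. One minimal alternative is to split the hidden list once and check the field whenever it loses focus:

        $(document).ready(function () {
            // Turn the hidden comma-separated list into an array of trimmed entries
            var reserved = $.map($('#database').val().split(','), $.trim);

            $('#data').blur(function () {
                if ($.inArray($.trim(this.value), reserved) !== -1) {
                    alert('error');
                }
            });
        });

    For anything security-sensitive the check should still be repeated server-side, since the hidden list and the script are both visible to the user.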

    Read the article

  • How to deal with a flaw in System.Data.DataTableExtensions.CopyToDataTable()

    - by andy
    Hey guys, so I've come across something which is perhaps a flaw in the Extension method .CopyToDataTable. This method is used by Importing (in VB.NET) System.Data.DataTableExtensions and then calling the method against an IEnumerable. You would do this if you want to filter a Datatable using LINQ, and then restore the DataTable at the end. i.e: Imports System.Data.DataRowExtensions Imports System.Data.DataTableExtensions Public Class SomeClass Private Shared Function GetData() As DataTable Dim Data As DataTable Data = LegacyADO.NETDBCall Data = Data.AsEnumerable.Where(Function(dr) dr.Field(Of Integer)("SomeField") = 5).CopyToDataTable() Return Data End Function End Class In the example above, the "WHERE" filtering might return no results. If this happens CopyToDataTable throws an exception because there are no DataRows. Why? The correct behavior should be to return a DataTable with Rows.Count = 0. Can anyone think of a clean workaround to this, in such a way that whoever calls CopyToDataTable doesn't have to be aware of this issue? System.Data.DataTableExtensions is a Static Class so I can't override the behavior....any ideas? Have I missed something? cheers UPDATE: I have submitted this as an issue to Connect. I would still like some suggestions, but if you agree with me, you could vote up the issue at Connect via the link above cheers
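
    One hedged workaround is to add a sibling extension method (with a different name, since the framework one can't be overridden) that clones the source table's schema when the filtered sequence is empty, so callers never see the exception:

        Imports System.Collections.Generic
        Imports System.Data
        Imports System.Linq
        Imports System.Runtime.CompilerServices

        Public Module DataTableSafeExtensions
            ' Returns an empty table with the same schema instead of throwing
            ' when the filtered sequence contains no rows.
            <Extension()> _
            Public Function CopyToDataTableOrEmpty(Of T As DataRow)(ByVal rows As IEnumerable(Of T), ByVal schema As DataTable) As DataTable
                If rows.Any() Then
                    Return rows.CopyToDataTable()
                End If
                Return schema.Clone()
            End Function
        End Module

    GetData would then end with something like Data.AsEnumerable.Where(...).CopyToDataTableOrEmpty(Data), passing the original table in as the schema template.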

    Read the article

  • IPad SQLite Push and Pull Data from external MS SQL Server DB

    - by MattyD
    This carries on from my previous post (http://stackoverflow.com/questions/4182664/ipad-app-pull-and-push-relational-data). My plan is that when the ipad application starts I am going to pull data (config data i.e. Departments, Types etc etc relational data that is used across the system) from a webhosted MS SQL Server DB via a webservice and populate it into an SQL Lite DB on the IPad. Then when I load a listing I will pull the data over the line again via a webservice and populate it into the SQL Lite db on the ipad (than just run select commands to populate the listing). My questions are: 1. What is the most efficient way to transfer data across the line via the web? Everyone seems to do it a different way. My idea is that I will have a webService for each type of data pull (e.g. RetrieveContactListing) that will query the db and than convert that data into "something" to send across the line. My question really is what is the "something" that it should be converting into? 2. Everyone talks about odata services. Is this suited for applications where complex read and writes are needed? Ive created a simple iphone app before that talked to an sql server db (i just sent my own structured xml across the line) but now with this app the data calls are going to be a lot larger so efficiency is key.

    Read the article

  • Unit test approach for generic classes/methods

    - by Greg
    Hi, What's the recommended way to cover off unit testing of generic classes/methods? For example (referring to my example code below), would it be a case of having 2 or 3 times the tests, to cover testing the methods with a few different TKey and TNode types? Or is just one closed class enough? public class TopologyBase<TKey, TNode, TRelationship> where TNode : NodeBase<TKey>, new() where TRelationship : RelationshipBase<TKey>, new() { // Properties public Dictionary<TKey, NodeBase<TKey>> Nodes { get; private set; } public List<RelationshipBase<TKey>> Relationships { get; private set; } // Constructors protected TopologyBase() { Nodes = new Dictionary<TKey, NodeBase<TKey>>(); Relationships = new List<RelationshipBase<TKey>>(); } // Methods public TNode CreateNode(TKey key) { var node = new TNode {Key = key}; Nodes.Add(node.Key, node); return node; } public void CreateRelationship(NodeBase<TKey> parent, NodeBase<TKey> child) { . . .
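
    In practice one closed set of type arguments usually exercises all of the generic logic, with extra instantiations only where behaviour genuinely depends on the type parameter (for example a reference-type versus value-type key). A hedged MSTest-style sketch, using hypothetical concrete closings just for the tests:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        // Hypothetical closings; they only need the parameterless constructors the new() constraints require
        class IntNode : NodeBase<int> { }
        class IntRelationship : RelationshipBase<int> { }
        class IntTopology : TopologyBase<int, IntNode, IntRelationship> { }

        [TestClass]
        public class TopologyBaseTests
        {
            [TestMethod]
            public void CreateNode_RegistersNodeUnderItsKey()
            {
                var topology = new IntTopology();

                var node = topology.CreateNode(42);

                Assert.AreSame(node, topology.Nodes[42]);
                Assert.AreEqual(42, node.Key);
            }
        }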

    Read the article

  • Passing $_GET or $_POST data to PHP script that is run with wget

    - by Matt
    Hello, I have the following line of PHP code which works great: exec( 'wget http://www.mydomain.com/u1.php /dev/null &' ); u1.php acts to do various types of maintenance on my server and the above command makes it happen in the background. No problems there. But I need to pass variable data to u1.php before it's executed. I'd like to pass POST data preferably, but could accommodate GET or SESSION data if POST isn't an option. Basically the type of data being passed is user-specific and will vary depending on who is logged in to the site and triggering the above code. I've tried adding the GET data to the end of the URL and that didn't work. So how else might I be able to send the data to u1.php? POST data preferred, SESSION data would work as well (but I tried this and it didn't pick up the logged in user's session data). GET would be a last resort. Thanks!
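
    A hedged sketch of one way to hand the values over: let wget carry them itself with --post-data, and run everything through escapeshellarg so user-specific values can't break (or inject into) the shell command. The session field name is hypothetical:

        <?php
        // Values specific to the logged-in user (field name assumed)
        $post = http_build_query(array(
            'user_id' => $_SESSION['user_id'],
            'task'    => 'maintenance',
        ));

        $cmd = 'wget -q -O /dev/null --post-data=' . escapeshellarg($post)
             . ' ' . escapeshellarg('http://www.mydomain.com/u1.php')
             . ' > /dev/null 2>&1 &';

        exec($cmd);   // u1.php then reads the values from $_POST as usual

    Session data won't carry over automatically, because wget is a separate client with its own (empty) cookie jar; that is probably why the earlier attempt saw no logged-in session inside u1.php.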

    Read the article

  • How best to organize projects folders for unit tests in .NET?

    - by Dan Bailiff
    So I'm trying to introduce unit testing to my group. I've successfully upgraded a VS'05 web site project to a VS'08 web application, and now have a solution with the web app project and a unit test project. The issue now is how to fit this back into the source repository such that we don't break the build system and the unit test projects are persisted as well. Right now we have something like this: c:\root c:\root\projectA c:\root\projectB c:\root\projectC where projectA contains the sln file and all other related files/folders for the project. Now I have this new solution that looks like this: c:\root\projectA (parent folder) c:\root\projectA\projectA (the production code project) c:\root\projectA\projectA_Test (the unit test project) c:\root\projectA\TestResults c:\root\projecta\projectA.sln How do I integrate this new structure back into the code repository? I'd really prefer to keep the production code folder where it was in the source repository for the sake of the build, but is this necessary? If I keep the production code project in its usual place then where do I keep my unit test projects and how do I connect them with a sln file? Is it better to use this new structure and adjust the build process? I'd love to hear how other people are dealing with this issue of upgrading legacy projects to unit testing.

    Read the article

  • c# Unit Test: Writing to Settings in unit test does not save values in user.config

    - by HorstWalter
    I am running a C# unit test (VS 2008). Within the test I write to the settings, which should result in saving the data to user.config. Settings.Default.X = "History"; // X is string Settings.Default.Save(); But this simply does not create the file (I have cross-checked under "C:\Documents and Settings\HW\Local Settings\Application Data"). If I create the same stuff as a console application, there is no problem persisting the data (same code). Is there something special I need to consider when doing this in a unit test?

    Read the article

  • jQuery .Ajax() function selector , store specific data in variable

    - by user279321
    I am new to jQuery and I have the following problem. My project has, say, 2 pages: 1.JSP and 2.html. Now I want to pick selected data from 2.html and use it on 1.JSP. Now this was achieved very easily using .load, but I want the data to be present in a JavaScript variable rather than put it on the page (div tags, etc.), so that I can work upon that data (modify or add to database). I tried using .ajax and was able to write the following code: var value = (function () { var val = null; var filename = "2.html"; $.ajax ({ 'async': false, 'global': false, 'url': filename, 'success' : function(data) { val = data; } }) return val; })() document.write(value) Where do I put the selector format (say div.id5) so that my variable has only the relevant data rather than the full file data?
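
    Once the raw markup is in hand, one hedged approach is to parse it into a detached element and keep only the fragment of interest with .find() (for elements at the top level of the response, .filter() is the one to reach for):

        var value = (function () {
            var val = null;
            $.ajax({
                url: '2.html',
                async: false,          // keeps the original synchronous behaviour
                success: function (data) {
                    // Parse the returned markup off-DOM and keep only #id5
                    val = $('<div/>').append(data).find('#id5').html();
                }
            });
            return val;
        })();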

    Read the article

  • adding nodes to a binary search tree randomly deletes nodes

    - by SDLFunTimes
    Hi, stack. I've got a binary tree of type TYPE (TYPE is a typedef of data*) that can add and remove elements. However for some reason certain values added will overwrite previous elements. Here's my code with examples of it inserting without overwriting elements and it not overwriting elements. the data I'm storing: struct data { int number; char *name; }; typedef struct data data; # ifndef TYPE # define TYPE data* # define TYPE_SIZE sizeof(data*) # endif The tree struct: struct Node { TYPE val; struct Node *left; struct Node *rght; }; struct BSTree { struct Node *root; int cnt; }; The comparator for the data. int compare(TYPE left, TYPE right) { int left_len; int right_len; int shortest_string; /* find longest string */ left_len = strlen(left->name); right_len = strlen(right->name); if(right_len < left_len) { shortest_string = right_len; } else { shortest_string = left_len; } /* compare strings */ if(strncmp(left->name, right->name, shortest_string) > 1) { return 1; } else if(strncmp(left->name, right->name, shortest_string) < 1) { return -1; } else { /* strings are equal */ if(left->number > right->number) { return 1; } else if(left->number < right->number) { return -1; } else { return 0; } } } And the add method struct Node* _addNode(struct Node* cur, TYPE val) { if(cur == NULL) { /* no root has been made */ cur = _createNode(val); return cur; } else { int cmp; cmp = compare(cur->val, val); if(cmp == -1) { /* go left */ if(cur->left == NULL) { printf("adding on left node val %d\n", cur->val->number); cur->left = _createNode(val); } else { return _addNode(cur->left, val); } } else if(cmp >= 0) { /* go right */ if(cur->rght == NULL) { printf("adding on right node val %d\n", cur->val->number); cur->rght = _createNode(val); } else { return _addNode(cur->rght, val); } } return cur; } } void addBSTree(struct BSTree *tree, TYPE val) { tree->root = _addNode(tree->root, val); tree->cnt++; } The function to print the tree: void printTree(struct Node *cur) { if (cur == 0) { printf("\n"); } else { printf("("); printTree(cur->left); printf(" %s, %d ", cur->val->name, cur->val->number); printTree(cur->rght); printf(")\n"); } } Here's an example of some data that will overwrite previous elements: struct BSTree myTree; struct data myData1, myData2, myData3; myData1.number = 5; myData1.name = "rooty"; myData2.number = 1; myData2.name = "lefty"; myData3.number = 10; myData3.name = "righty"; initBSTree(&myTree); addBSTree(&myTree, &myData1); addBSTree(&myTree, &myData2); addBSTree(&myTree, &myData3); printTree(myTree.root); Which will print: (( righty, 10 ) lefty, 1 ) Finally here's some test data that will go in the exact same spot as the previous data, but this time no data is overwritten: struct BSTree myTree; struct data myData1, myData2, myData3; myData1.number = 5; myData1.name = "i"; myData2.number = 5; myData2.name = "h"; myData3.number = 5; myData3.name = "j"; initBSTree(&myTree); addBSTree(&myTree, &myData1); addBSTree(&myTree, &myData2); addBSTree(&myTree, &myData3); printTree(myTree.root); Which prints: (( j, 5 ) i, 5 ( h, 5 ) ) Does anyone know what might be going wrong? Sorry if this post was kind of long.
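
    A hedged diagnosis of the disappearing nodes: in the recursive branches, _addNode returns the result of _addNode(cur->left, ...) or _addNode(cur->rght, ...), so addBSTree ends up re-pointing tree->root at whichever subtree the insertion descended into, discarding everything above it. (Separately, strncmp returns a negative, zero or positive value, so the comparisons against 1 in compare() misclassify equal prefixes.) A minimal corrected sketch of the insert, keeping the original ordering convention:

        /* Re-link the child and always hand back the unchanged subtree root,
         * so addBSTree never replaces tree->root with an inner node. */
        struct Node *_addNode(struct Node *cur, TYPE val)
        {
            if (cur == NULL)
                return _createNode(val);

            if (compare(cur->val, val) < 0)
                cur->left = _addNode(cur->left, val);
            else
                cur->rght = _addNode(cur->rght, val);

            return cur;
        }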

    Read the article

  • SQL Server Compact 'Data Directory' macro in Connection String - more info needed

    - by codeulike
    So, as described on this msdn page, when you define a Connection String for SQL Server Compact 3.5, you can use the "Data Directory" macro, like this: quote from this msdn page: Data Directory Support SQL Server Compact 3.5 now supports the Data Directory macro. This means that if you add the string |DataDirectory| (enclosed in pipe symbols) to a file path, it will resolve to the path of the database. For example, consider the connection string: "Data Source= c:\program files\MyApp\Mydb.sdf" When using Data Directory, you can instead use the following connection string: "Data Source = |DataDirectory|\Mydb.sdf" For more information, see How to: Deploy a SQL Server Compact 3.5 Database with an Application. However, the 'for more information' link on msdn doesn't actually give any more information. So my question is: How does the |Data Directory| macro translate at run time? For WinForm apps, it seems to just give the location of the executable. Or is it more complicated than that?
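
    As far as I know, |DataDirectory| expands to whatever the app domain's "DataDirectory" property holds: ASP.NET sets it to App_Data, ClickOnce sets it to the deployment's data folder, and for a plain WinForms or console app it falls back to the application's base directory, which matches the behaviour described above. A desktop app can point it somewhere else before opening the connection; a minimal sketch (folder name hypothetical, assumes a reference to System.Data.SqlServerCe and that Mydb.sdf has been deployed to that folder):

        using System;
        using System.Data.SqlServerCe;
        using System.IO;

        class Program
        {
            static void Main()
            {
                // Redirect |DataDirectory| to a per-user folder before any connection is opened
                var folder = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
                    "MyApp");
                Directory.CreateDirectory(folder);
                AppDomain.CurrentDomain.SetData("DataDirectory", folder);

                using (var conn = new SqlCeConnection(@"Data Source=|DataDirectory|\Mydb.sdf"))
                {
                    conn.Open();   // resolves to <LocalAppData>\MyApp\Mydb.sdf
                    Console.WriteLine(conn.State);
                }
            }
        }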

    Read the article

  • Recording user data for heatmap with javascript

    - by Hanpan
    Hi, I was wondering how sites such as crazyegg.com store user click data during a session. Obviously there is some underlying script which is storing each click's data, but how is that data then populated into a database? It seems to me the simple solution would be to send the data via AJAX, but when you consider that it's almost impossible to get a cross-browser page-unload function set up, I'm wondering if there is perhaps some other more advanced way of getting metric data. I even saw a site which records each mouse movement, and I am guessing they are definitely not sending that data to a database on each mouse move event. So, in a nutshell, what kind of technology would I need in order to monitor user activity on my site and then store this information in order to create metric data? I am not looking to recreate GA, I'm just very interested to know how this sort of thing is done. Thanks in advance
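
    The usual trick, sketched here with a hypothetical /collect endpoint, is to buffer events client-side and flush them in small batches on a timer (often as an image "beacon" GET), so no single click or mouse move ever triggers its own round-trip and nothing depends on a reliable unload event:

        // Buffer clicks and flush a batch every few seconds via an image beacon
        var buffer = [];

        document.addEventListener('click', function (e) {
            buffer.push(e.pageX + 'x' + e.pageY);
        }, true);

        setInterval(function () {
            if (buffer.length === 0) return;
            var payload = encodeURIComponent(buffer.join(','));
            buffer = [];
            new Image().src = '/collect?page=' +
                encodeURIComponent(location.pathname) + '&clicks=' + payload;
        }, 5000);

    Mouse-move tracking works the same way, just with heavier sampling and larger batches; any events still sitting in the buffer when the visitor leaves are simply lost, which these tools generally accept.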

    Read the article
