Search Results

Search found 29235 results on 1170 pages for 'dynamic management objects'.

Page 699/1170

  • Is marshaling/serialization in PHP as simple as serialize($var)?

    - by Ygam
    here's a definition of marshalling from Wikipedia: In computer science, marshalling (similar to serialization) is the process of transforming the memory representation of an object to a data format suitable for storage or transmission. It is typically used when data must be moved between different parts of a computer program or from one program to another. I have always done data serialization in PHP via its serialize() function, usually on objects or arrays. But how does Wikipedia's definition of marshalling/serialization play out in this serialize() function?
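
    A minimal sketch of that round trip (the file path and array contents are illustrative): serialize() flattens the in-memory value into a string suitable for storage, and unserialize() rebuilds an equivalent value, which is exactly the transformation the Wikipedia definition describes.

        <?php
        // serialize(): memory representation -> storable/transmittable string
        $var = array('id' => 42, 'tags' => array('php', 'serialization'));
        $flat = serialize($var);                       // e.g. 'a:2:{s:2:"id";i:42;...}'
        file_put_contents('/tmp/state.txt', $flat);    // "storage or transmission"

        // unserialize(): string -> fresh in-memory representation
        $restored = unserialize(file_get_contents('/tmp/state.txt'));
        var_dump($restored == $var);                   // bool(true)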

    Read the article

  • javascript conflict on accessing the DOM

    - by justjoe
    I read this statement in a book: The Document Object Model or DOM is really not a part of JavaScript but a separate entity existing outside it. Although you can use JavaScript to manipulate DOM objects, other scripting languages may equally well access them too. What is the best way to avoid conflicts between JavaScript and other client-side scripting languages when we have to deal with the XMLHttpRequest object?
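
    One common defensive pattern, sketched in plain JavaScript (MyApp and fetchData are illustrative names): keep your code out of the global namespace so another script on the page can't clobber your variables or handlers.

        // Expose a single global (MyApp); everything else stays private
        // inside the immediately-invoked function expression.
        var MyApp = (function () {
            function fetchData(url, onDone) {
                var xhr = new XMLHttpRequest();
                xhr.onreadystatechange = function () {
                    if (xhr.readyState === 4 && xhr.status === 200) {
                        onDone(xhr.responseText);
                    }
                };
                xhr.open("GET", url, true);
                xhr.send(null);
            }
            return { fetchData: fetchData };
        }());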

    Read the article

  • Routing Essentials

    - by zharvey
    I'm a programmer trying to fill a big hole in my understanding of networking basics. I've been reading a good book (Networking Bible by Sosinsky), but I find it contains a lot of "assumed" knowledge, where terms/concepts are thrown at the reader without a proper introduction. I understand that a "route" is a path through a network, but I am struggling to visualize some routing-based concepts. Namely: How do routes actually manifest themselves in the hardware? Are they just a list of IP addresses that gets computed at the network layer and then executed by the transport layer? What kind of data exists in a so-called routing table? Is a routing table just the mechanism for holding these lists of IP addresses (see above)? What are the performance pros/cons of a static route as opposed to a dynamic route?
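
    To make the "routing table" concrete, a sketch on a Linux box (addresses and interface names are illustrative). Each entry maps a destination prefix to a next hop and an outgoing interface; a dynamic routing protocol (OSPF, BGP, etc.) installs and updates entries like these automatically as the topology changes, while a static route is entered once and never adapts on its own.

        # Show the kernel routing table: destination prefix, gateway, interface
        ip route show
        # default via 192.168.1.1 dev eth0
        # 192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.10

        # Add a static route by hand
        ip route add 10.0.0.0/24 via 192.168.1.254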

    Read the article

  • XNA Level config file in C#

    - by Midday
    I'm working on a small game for class and was wondering: what is an easy way to handle level configuration files, e.g. object placement, names, etc.? I'm new to C# but fluent in Java and Ruby. So XML? YAML? Plain text? Serialized objects?
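
    One low-friction option, sketched with the BCL's XmlSerializer (the types and file layout are illustrative, not part of XNA): define plain classes for the level data and let the serializer handle the file format.

        // Plain data classes plus a Load helper
        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        public class LevelObject
        {
            public string Name;
            public float X;
            public float Y;
        }

        public class LevelConfig
        {
            public string LevelName;
            public List<LevelObject> Objects = new List<LevelObject>();

            public static LevelConfig Load(string path)
            {
                var serializer = new XmlSerializer(typeof(LevelConfig));
                using (var stream = File.OpenRead(path))
                {
                    return (LevelConfig)serializer.Deserialize(stream);
                }
            }
        }

    XNA's content pipeline has its own XML route (IntermediateSerializer), but plain XmlSerializer keeps the config readable and editable outside the pipeline.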

    Read the article

  • bash grep finding java declarations

    - by Amarsh
    I have a huge .java file and I want to find all declared objects of a given className. I think the declaration will always have one of the following signatures: className objName; or className objName = or className objName=. Can someone suggest a grep pattern that will find these signatures? I have the following (incomplete): cat $rootFile | grep "$className "
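
    A sketch with grep -E (assumes one declaration per line, simple identifiers, and no generics like className<Foo>): match the class name, whitespace, an identifier, then either "=" or ";". Note also that cat file | grep can be shortened to grep on the file directly.

        # Matches: "className objName;", "className objName =", "className objName="
        grep -E "^[[:space:]]*${className}[[:space:]]+[A-Za-z_][A-Za-z0-9_]*[[:space:]]*(=|;)" "$rootFile"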

    Read the article

  • Maximum float value in php

    - by Alex Deem
    Is there a way to programmatically retrieve the maximum float value in PHP, akin to FLT_MAX in C or std::numeric_limits<float>::max() in C++? I am using something like the following (on PHP 5.2):

        $minimumCost = MAXIMUM_FLOAT_VALUE??;
        foreach ($objects as $object) {
            $cost = $object->CalculateCost();
            if ($cost < $minimumCost) {
                $minimumCost = $cost;
            }
        }
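
    A sketch of two options (the loop body is the asker's, unchanged): PHP 7.2+ defines the constant PHP_FLOAT_MAX; on older versions, including 5.2, the predefined INF constant compares greater than every finite float and works as the initial sentinel.

        <?php
        // Fall back to INF where PHP_FLOAT_MAX (7.2+) doesn't exist.
        $minimumCost = defined('PHP_FLOAT_MAX') ? PHP_FLOAT_MAX : INF;
        foreach ($objects as $object) {
            $cost = $object->CalculateCost();
            if ($cost < $minimumCost) {
                $minimumCost = $cost;
            }
        }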

    Read the article

  • MS SQL: Primary filegroup is full

    - by aximili
    I have a very large table in my database and I am starting to get this error: "Could not allocate a new page for database 'mydatabase' because of insufficient disk space in filegroup 'PRIMARY'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup." How do you fix this error? I don't understand the suggestions it makes.
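
    A sketch of the message's second and third suggestions in T-SQL (the logical file names, path, and sizes are illustrative; the MODIFY FILE name must be the database's actual logical data-file name):

        -- Add another data file to the PRIMARY filegroup
        ALTER DATABASE mydatabase
        ADD FILE (
            NAME = N'mydatabase_data2',
            FILENAME = N'D:\Data\mydatabase_data2.ndf',
            SIZE = 512MB
        ) TO FILEGROUP [PRIMARY];

        -- Or turn on autogrowth for the existing data file
        ALTER DATABASE mydatabase
        MODIFY FILE (NAME = N'mydatabase', FILEGROWTH = 256MB);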

    Read the article

  • In Nginx, can I handle either a Location: url or a Content-Type: text/html response from memcached?

    - by Sean Foo
    I'm setting up an nginx/Apache reverse proxy where nginx handles the static files and Apache the dynamic ones. I have a search engine, and depending on the search parameter I either forward the user directly to the page they are looking for or return a set of search results. I cache these results in memcached as:

        key:   /search.cgi?q=foo
        value: LOCATION: http://www.example.com/foo.html

    and

        key:   /search.cgi?q=bar
        value: CONTENT-TYPE: text/html <html> .... .... </html>

    I can pull the "Content-Type ..." values out of memcached using nginx and send them to the user, but I can't quite figure out how to handle a returned value like "Location ...". Can I?
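
    For reference, a minimal sketch of the content-type half (addresses and ports are illustrative). nginx's memcached module can only return a cached body verbatim; acting on a cached "LOCATION:" value as a real redirect isn't something the stock module does, so that case would need an embedded handler (e.g. perl/lua) or a fall-through to Apache.

        location /search.cgi {
            set $memcached_key $request_uri;   # key: /search.cgi?q=...
            memcached_pass 127.0.0.1:11211;
            default_type text/html;            # cached body is served as-is
            error_page 404 502 504 = @apache;  # cache miss -> dynamic backend
        }
        location @apache {
            proxy_pass http://127.0.0.1:8080;
        }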

    Read the article

  • How can I use linq to initialize an array of repeated elements?

    - by Eric
    At present, I'm using something like this to build a list of 10 objects: myList = (from _ in Enumerable.Range(0, 10) select new MyObject {...}).ToList(); This is based off my Python background, where I'd write: myList = [MyObject(...) for _ in range(10)] Note that I want my list to contain 10 distinct instances of my object, not the same instance 10 times. Is this still a sensible way to do things in C#? Is there a cost to doing it this way over a simple for loop?
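
    That shape is idiomatic; a slightly more direct spelling uses method syntax (a sketch; MyObject stands in for the asker's type). The key point is that the lambda runs once per element, so you get distinct instances, whereas Enumerable.Repeat(new MyObject(), 10) would evaluate its argument once and repeat the same reference ten times.

        using System.Linq;

        var myList = Enumerable.Range(0, 10)
                               .Select(_ => new MyObject { /* ... */ })
                               .ToList();

    The overhead versus a plain for loop is a delegate call per element, which is rarely worth worrying about for ten items.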

    Read the article

  • sql 2008 express connection problems

    - by user163457
    Hi, I've just installed a fresh copy of SQL Server 2008 Express. Before I did anything else I opened Management Studio and successfully connected using Windows Authentication. However, when I ran "telnet localhost 1433" on the command line I got the error "Could not open connection to the host, on port 1433: Connect failed". I checked netstat and there is nothing listening on port 1433. Before I go any further, is there a problem with the install? Thanks, Shane

    Read the article

  • DirectX Resource Leak

    - by srand
    At the end of my DirectX application I get "The Direct3D device has a non-zero reference count, meaning some objects were not released." The application is large and was not written by me; how can I go about debugging which resources are not being released?
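
    If this is a Direct3D 9 application (an assumption; D3D10/11 have their own debug-layer equivalents), one common approach is the debug runtime: define D3D_DEBUG_INFO before including d3d9.h in a debug build, then switch to the debug runtime in the DirectX Control Panel. On shutdown it reports each unreleased object with an allocation id you can then break on.

        // A sketch: must appear before the d3d9.h include in every
        // translation unit of the debug build.
        #define D3D_DEBUG_INFO
        #include <d3d9.h>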

    Read the article

  • many-to-many performance concerns with fluent nhibernate.

    - by Ciel
    I have a situation where I have several many-to-many associations, upwards of 12 to 15. Reading around, I've seen that it's generally believed that many-to-many associations are not 'typical', yet they are the only way I have been able to create the associations appropriate for my case, so I'm not sure how to optimize any further. Here is my basic scenario:

        class Page
        {
            IList<Tag> Tags { get; set; }
            IList<Modification> Modifications { get; set; }
            IList<Aspect> Aspects { get; set; }
        }

    This is one of my 'core' classes and, coincidentally, one of my core tables. Virtually half of the objects in my code can have an IList<Page>, and some of them have an IList<T> where T has its own IList<Page>. As you can see, from an object-oriented standpoint this is not really a problem, but from a database standpoint it begins to introduce a lot of junction tables. So far it has worked fine for me, but I am wondering if anyone has any ideas on how I could improve on this structure. I've spent a long time thinking, and in order to achieve the appropriate level of association I cannot think of any way to improve it. The only thing I have come up with is to make intermediate classes for each object that has an IList<Page>, but that doesn't really do anything that HasManyToMany does not already do, except introduce another class. It does not extend the functionality and, from what I can tell, it does not improve performance.

    Any thoughts? I am also concerned about primary-key limits in this scenario. Most everything needs to be able to have these properties, but the Pages cannot be unique to each object, because they are going to be frequently shared and joined between multiple objects. All relationships are one-sided (that is, a Page has no knowledge of what owns it); because of this, I also have no Inverse()-mapped HasManyToMany collections.

    I have also read the similar question "Usage of ORMs like NHibernate when there are many associations - performance concerns", but it did not really answer my concerns.
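
    For context, a sketch of what one such association looks like in Fluent NHibernate (table and column names are illustrative, and the owning type is hypothetical): each HasManyToMany call like this produces exactly one junction table, so 12-15 associations means 12-15 junction tables no matter how the classes are arranged.

        using FluentNHibernate.Mapping;

        // PageOwner is a hypothetical owning type with Id and Pages properties
        public class PageOwnerMap : ClassMap<PageOwner>
        {
            public PageOwnerMap()
            {
                Id(x => x.Id);
                HasManyToMany(x => x.Pages)
                    .Table("PageOwnerPage")         // the junction table
                    .ParentKeyColumn("OwnerId")
                    .ChildKeyColumn("PageId");
                // no .Inverse(): the relationship is one-sided, as described
            }
        }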

    Read the article

  • Why don't I just build the whole web app in Javascript and Javascript HTML Templates?

    - by viatropos
    I'm getting to the point on an app where I need to start caching things, and it got me thinking...

    In some parts of the app I render table rows (jqGrid, SlickGrid, etc.) or fancy div rows (like in the New Twitter) by grabbing pure JSON and running it through something like Mustache or jquery.tmpl. In other parts of the app I just render the info in pure HTML (server-side HAML templates), and if there's searching/paginating I simply go to a new URL and load a new HTML page.

    Now the problem is caching and maintainability. On one hand I'm thinking: if everything were built from JavaScript HTML templates, my app would serve just an HTML layout/shell and a bunch of JSON. If you look at the Facebook and Twitter HTML source, that's basically what they're doing (95% JSON/JavaScript, 5% HTML). My app would then only need to cache JSON (pages, actions, and/or records), and you'd hit that cache whether you were a remote API developer accessing a JSON API or the straight web app. That is, I wouldn't need two caches, one for the JSON and one for the HTML; that seems like it'd cut my cache store in half and streamline things a bit.

    On the other hand, from what I've seen and experienced, generating static HTML server-side and caching that seems to perform much better cross-browser; you get the graphics instantly and don't have to wait that split second for JavaScript to render them. Stack Overflow seems to do everything in plain HTML, and you can tell: everything appears at once. Notice how on twitter.com the page is blank for 0.5-1 seconds and then chunks in: the JavaScript has to render the JSON. The downside is that for anything dynamic (like endless scrolling or grids) I'd have to create JavaScript templates anyway, so I'd end up with server-side HAML templates, client-side JavaScript templates, and a lot more to cache.

    My question is: is there any consensus on how to approach this? What are the benefits and drawbacks, in your experience, of mixing the two versus going 100% one way or the other?

    Update: some reasons that factor into why I haven't yet decided to go with 100% JavaScript templating:

    - Performance. I haven't formally tested, but from what I've seen, raw HTML renders faster and more fluidly than JavaScript-generated HTML cross-browser. Plus, I'm not sure how mobile devices handle dynamic HTML performance-wise.
    - Testing. I have a lot of integration tests that work well with static HTML, so switching to JavaScript-only would require (1) more focused pure-JavaScript testing (Jasmine) and (2) integrating JavaScript into Capybara integration tests. This is just a matter of time and work, but it's probably significant.
    - Maintenance. Getting rid of HAML. I love HAML: it's easy to write, it prints pretty HTML, and it keeps code clean and maintenance easy. Going with JavaScript, there's nothing as concise.
    - SEO. I know Google handles the AJAX /#!/path scheme, but I haven't grasped how this will affect other search engines or how older browsers handle it. It seems like it'd require a significant setup.
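
    To make the first approach concrete, a minimal sketch of the JSON-plus-client-side-template rendering described above, using Mustache.js (element id, field names, and data are illustrative):

        // rows would normally arrive as JSON from the (cacheable) server endpoint
        var template = "<tr><td>{{name}}</td><td>{{date}}</td></tr>";
        var rows = [
            { name: "foo", date: "2011-01-01" },
            { name: "bar", date: "2011-01-02" }
        ];
        var html = "";
        for (var i = 0; i < rows.length; i++) {
            html += Mustache.render(template, rows[i]);
        }
        document.getElementById("grid-body").innerHTML = html;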

    Read the article

  • Learning Excel with a book

    - by Teifion
    I work at a company where all the TMs and assistant TMs use Excel for number management; we're not able to use anything else because the computers are locked down (for good reason, though I'll be asking IT anyway). The company also has a buy-a-book scheme. Can anybody recommend a good book on using Excel and writing macros for it?

    Read the article

  • What features would you like to see removed from C++?

    - by Justin Ethier
    This question was inspired by what-features-would-you-like-to-see-added-to-c. Basically, C++ is a great general-purpose language, but perhaps too general and feature-rich: multiple inheritance, operator overloading, manual memory management, templates, smart pointers, virtual destructors, legacy frameworks (think MFC), and I could just go on. Is there any one feature/aspect of C++ that you would like taken away, to make our lives easier as C++ developers? One feature per answer, please.

    Read the article

  • Spring annotation mvc - request and response

    - by Eqbal
    I am using annotation-based MVC and I am trying to get access to the request and response objects using this method declaration in my controller:

        @RequestMapping(method = RequestMethod.GET)
        public String checkRequest(Model model, HttpServletRequest request, HttpServletResponse response)

    But I get an error saying the GET method is not supported. I need the request and response to pass to another API call.

    Read the article

  • BlackBerry - RadioButtonField, hide border on select

    - by Jessica
    I have a screen with two RadioButtonField objects. By default, the first RadioButtonField shows a rectangle around it to indicate that it is focused, and the rectangle moves if you change focus to the other RadioButtonField or to other buttons and textboxes on the page. What I would like to know is: is there a way to hide this focus border?
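
    One approach to sketch (BlackBerry Java; this assumes the rectangle in question is the focus border that Field draws, and that a RadioButtonGroup named group already exists): subclass RadioButtonField and override the protected drawFocus hook with an empty body.

        // assumes: import net.rim.device.api.ui.Graphics;
        //          import net.rim.device.api.ui.component.RadioButtonField;
        RadioButtonField quiet = new RadioButtonField("Option A", group, true) {
            protected void drawFocus(Graphics graphics, boolean on) {
                // intentionally empty: suppress the default focus rectangle
            }
        };

    Note the field then gives no visual cue at all when focused, so some replacement cue (e.g. a paint override changing the label color) is usually needed.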

    Read the article

  • Is Software Raid1 Using mdadm with a Local Hard Disk and GNBD Possible?

    - by Travis
    I have multiple webservers which use many small files to create dynamic web pages. Caching the web pages isn't an option. The webserver also performs writes, so I need a synchronous filesystem. I'm looking to maximise performance, as it's my understanding that small files are the weakness (to varying degrees) of a cluster filesystem over ethernet. Currently I'm using CentOS 5.5, 64-bit. Since it's only about 300MB of data, I'm looking at mdadm using RAID-1 with a GNBD and a local hard disk, using the "--write-mostly" option so the reads are done from the local hard disk. Is this possible? If so, is there any advantage to making it a tmpfs disk instead of a local hard disk? Or will the files on the local hard disk just get cached in RAM anyway, so I won't see a performance gain by using tmpfs, assuming there's enough RAM available?
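
    The mdadm half is possible in principle; a sketch of the array creation (device names are illustrative, and this assumes the GNBD import appears as an ordinary block device). --write-mostly applies to the devices listed after it, marking the remote half so that reads prefer the local disk:

        mdadm --create /dev/md0 --level=1 --raid-devices=2 \
              /dev/sdb1 --write-mostly /dev/gnbd0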

    Read the article

  • jQuery.extend() not giving deep copy of object formed by constructor

    - by two7s_clash
    I'm trying to use this to clone a complicated object. The object in question has a property that is an array of other objects, and each of these has properties of different types, mostly primitives, but a couple of further objects and arrays. For example, an ellipsed version of what I am trying to clone:

        var asset = new Assets();

        function Assets() {
            this.values = [];
            this.sectionObj = Section;
            this.names = getNames;
            this.titles = getTitles;
            this.properties = getProperties;
            ...
            this.add = addAsset;

            function AssetObj(assetValues) {
                this.name = "";
                this.title = "";
                this.interface = "";
                ...
                this.protected = false;
                this.standaloneProtected = true;
                ...
                this.chaptersFree = [];
                this.chaptersUnavailable = [];
                ...
                this.mediaOptions = {
                    videoWidth: "",
                    videoHeight: "",
                    downloadMedia: true,
                    downloadMediaExt: "zip"
                    ...
                };
                this.chaptersAvailable = [];

                if (typeof assetValues == "undefined") {
                    return;
                }
                for (var name in assetValues) {
                    if (typeof assetValues[name] == "undefined") {
                        this[name] = "";
                    } else {
                        this[name] = assetValues[name];
                    }
                }
            }
            ...
            function Asset() {
                return new AssetObj();
            }
            ...
            function getProperties() {
                var propertiesArray = new Array();
                for (var property in this.values[0]) {
                    propertiesArray.push(property);
                }
                return propertiesArray;
            }
            ...
            function addAsset(assetValues) {
                var newValues;
                newValues = new AssetObj(assetValues);
                this.values.push(newValues);
            }
        }

    When I do var copiedAssets = $.extend(true, {}, assets);, copiedAssets.values == [], while assets.values == [Object { name="section_intro", more...}, Object { name="select_textbook", more...}, Object { name="quiz", more...}, 11 more...].

    When I do var copiedAssets = $.extend({}, assets);, all copiedAssets.values[X].properties are just pointers to the values in assets.

    What I want is a true deep copy all the way down. What am I missing? Do I need to write a custom extend function? If so, any recommended patterns?
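
    The likely explanation, with a minimal sketch of the behaviour: $.extend(true, ...) only recurses into plain objects and arrays (what jQuery.isPlainObject accepts); anything built by a custom constructor, like the AssetObj instances above, is copied by reference rather than cloned. A true deep copy therefore needs either a custom clone routine or restructuring the data as plain objects/arrays.

        function CustomType() { this.x = 1; }
        var source = { plain: { a: 1 }, custom: new CustomType() };
        var copy = $.extend(true, {}, source);

        console.log(copy.plain !== source.plain);   // true -- plain object was cloned
        console.log(copy.custom === source.custom); // true -- constructor-built object shared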

    Read the article

  • Am I just not understanding TDD unit testing (Asp.Net MVC project)?

    - by KallDrexx
    I am trying to figure out how to correctly and efficiently unit test my ASP.NET MVC project. When I started this project I bought Pro ASP.NET MVC, and with that book I learned about TDD and unit testing. After seeing the examples, and given that I work as a software engineer in QA at my current company, I was amazed at how awesome TDD seemed to be. So I started working on my project and went gung-ho writing unit tests for my database layer, business layer, and controllers. Everything got a unit test prior to implementation. At first I thought it was awesome, but then things started to go downhill. Here are the issues I started encountering:

    I ended up writing application code in order to make it possible for unit tests to be performed. I don't mean this in a good way, as in my code was broken and I had to fix it so the unit tests pass. I mean that abstracting out the database to a mock database is impossible due to the use of LINQ for data retrieval (using the generic repository pattern). The reason is that with LINQ-to-SQL or LINQ-to-Entities you can do joins just by doing:

        var objs = from p in _container.Projects
                   from o in p.Objects
                   select o;

    However, if you mock the database layer out, in order to have that LINQ pass the unit test you must change it to:

        var objs = from p in _container.Projects
                   join o in _container.Objects on p.Id equals o.ProjectId
                   select o;

    Not only does this mean you are changing your application logic just so you can unit test it, you are making your code less efficient for the sole purpose of testability and giving up many of the advantages of using an ORM in the first place. Furthermore, since a lot of the IDs for my models are database-generated, I had to write additional code to handle the non-database tests, since IDs were never generated; I still had to handle those cases for the unit tests to pass, yet they would never occur in real scenarios. Thus I ended up throwing out my database unit testing.

    Writing unit tests for controllers was easy as long as I was returning views. However, the major part of my application (and the one that would benefit most from unit testing) is a complicated AJAX web application. For various reasons I decided to change the app from returning views to returning JSON with the data I needed. After this my unit tests became extremely painful to write, as I have not found any good way to write unit tests for non-trivial JSON. After pounding my head and wasting a ton of time trying to find a good way to unit test the JSON, I gave up and deleted all of my controller unit tests (all controller actions are focused on this part of the app so far).

    So finally I was left with testing the service layer (BLL). Right now I am using EF4; however, I had this issue with LINQ-to-SQL as well. I chose the EF4 model-first approach because, to me, it makes sense to do it that way (define my business objects and let the framework figure out how to translate that into the SQL backend). This was fine at the beginning, but now it is becoming cumbersome due to relationships. For example, say I have Project, User, and Object entities. An Object must be associated with a project, and a project must be associated with a user. These are not only database rules, they are my business rules as well. However, say I want a unit test that checks that I am able to save an object (a simple example). I now have to do the following just to make sure the save worked:

        User usr = new User { Name = "Me" };
        _userService.SaveUser(usr);
        Project prj = new Project { Name = "Test Project", Owner = usr };
        _projectService.SaveProject(prj);
        Object obj = new Object { Name = "Test Object" };
        _objectService.SaveObject(obj);
        // Perform verifications

    There are several issues with having to do all this just to perform one unit test. For starters, if I add a new dependency, such as all projects must belong to a category, I must go into every single unit test that references a project, add code to save the category, then add code to attach the category to the project. This can be a huge effort down the road for a very simple business-logic change, and yet almost none of the unit tests I would be modifying for this requirement are actually meant to test that feature/requirement. If I then add verification to my SaveProject method, so that projects cannot be saved unless they have a name of at least 5 characters, I have to go through every Object and Project unit test to make sure the new requirement doesn't make any unrelated tests fail. And if there is an issue in the UserService.SaveUser() method, it will cause all Project and Object unit tests to fail, and the cause won't be immediately noticeable without digging through the exceptions. Thus I have removed all service-layer unit tests from my project.

    I could go on and on, but so far I have not seen any way for unit testing to actually help me and not get in my way. I can see specific cases where I can, and probably will, implement unit tests, such as making sure my data-verification methods work correctly, but those cases are few and far between. Some of my issues could probably be mitigated, but not without adding extra layers to my application, and thus creating more points of failure just so I can unit test. So I have no unit tests left in my code. Luckily I use source control heavily, so I can get them back if I need to, but I just don't see the point.

    Everywhere on the internet I see people talking about how great TDD unit tests are, and I'm not just talking about the fanatical people. The few people who dismiss TDD/unit tests give bad arguments, claiming they can debug more efficiently by hand through the IDE, or that their coding skills are so amazing they don't need it. I recognize that both of those arguments are utter bollocks, especially for a project that needs to be maintained by multiple developers, but valid rebuttals to TDD seem few and far between. So the point of this post is to ask: am I just not understanding how to use TDD and automatic unit tests?

    Read the article

  • What's the case when using software licensed under the GPL or LGPL?

    - by Johnas
    With everything legal and in line with the ethical questions in software development: am I allowed to use an open source product in software that I charge a fee for when selling it? Scenario: I've developed a PHP content management system (CMS) and use some Linux executables licensed under the GPL or LGPL in my CMS to accomplish various tasks like image editing. I'm selling the CMS and also including the executables when I deliver the product. I do not edit the source code of the GPL software; I'm just using it.

    Read the article

  • Installing WordPress - PHP/MySQL extension constantly appears missing

    - by Driss Zouak
    I've got Win2003 with IIS6, PHP 5 and MySQL installed. I can confirm PHP is installed correctly because I have a testMe.php that runs properly. When I run the WordPress setup, I get informed that "Your PHP installation appears to be missing the MySQL extension which is required by WordPress." But in my php.ini, in the DYNAMIC EXTENSIONS section, I have:

        extension=php_mysql.dll
        extension=php_mysqli.dll

    I verified that mysql.dll and libmysql.dll are both in my PHP directory, and I copied my libmysql.dll to the C:\Windows\System32 directory. Still, when I try to run the initial setup for WordPress, I get the same message. I've Googled setting this up, and everything comes down to the above. I'm missing something, but none of the instructions I've found online seem to cover whatever that is.
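
    A quick diagnostic sketch (uses only stock PHP functions): drop this next to testMe.php and load it through IIS. If 'mysql' comes back missing, PHP isn't loading the extension despite php.ini, commonly because IIS reads a different php.ini than the command line does, or because extension_dir doesn't point at the directory containing php_mysql.dll.

        <?php
        // which php.ini is actually loaded, and where PHP looks for extensions
        var_dump(php_ini_loaded_file());      // PHP 5.2.4+; false if none loaded
        var_dump(ini_get('extension_dir'));
        // is the extension actually available?
        var_dump(extension_loaded('mysql'));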

    Read the article
