Search Results

Search found 18135 results on 726 pages for 'shared objects'.


  • Collaborative Organizations build Organizational Culture

    “A Collaborative organization builds its culture based on the idea of the family or an athletic team.” (Hoefling, 2001) As I grew up, I participated in many different types of clubs, civic organizations, and sports teams. Now, looking back at the more successful undertakings, I can see three commonalities amongst them. They all shared a defined purpose or goal, defined functional roles, and a shared sense of responsibility to the group.

    Defined Purpose or Goal
    In order to unite people to work together, they must share a common goal or have a common purpose. An example of this would be the Lions Club International Foundation. Its purpose is to help everyone lead healthier and more productive lives; it nurtures the potential of youth, promotes health, serves the elderly, empowers the disabled, and helps victims of disasters. This organization holds localized meetings across the world and works in conjunction with other localized clubs within its organization, along with other organizations, to promote common goals. If there are no common goals for the group, then there is nothing that binds people to the group, and nothing will be done.

    Defined Functional Roles
    In order for an organization to work and function as a team, it must have defined roles, and everyone must know how their roles are interdependent. Let's shed light on this subject by looking at a football team's offense. Each player has an assigned role to play each time the ball is snapped. The offensive line blocks for the running back or quarterback; the quarterback passes the ball to the wide receiver or hands it off to the running back; and the running back and wide receivers run with the ball toward the goal line. Each member of this team shares a common goal of scoring a touchdown, but if each team member does not fulfill his assigned role, the offense will collapse and the team will lose yards. This is a setback to the team's goal of scoring a touchdown because they are then potentially farther away from the goal line. In addition, if the players do not know their roles and how they are part of a larger team, even larger yardage losses can occur.

    Shared Sense of Personal Responsibility to the Group
    Shared responsibility comes with shared common goals. Each person in the organization must do their part to promote the common shared goal or purpose based on their abilities. A prime example of this is a wrestling team competing in a match. Points are awarded to the team based on how many wins the team achieves in the meet and, of those, how many wins were won by decision or by pin. If a wrestler pins his opponent, the team will receive two points for the win, but if the wrestler wins by decision, then the team only gets one point for the win. So it is the responsibility of each person on the team not to get pinned if they are unable to win the match. If a team member gets pinned, then the other team receives an additional point for the win.

    References: Hoefling, T. (2001). Working Virtually: Managing People for Successful Virtual Teams and Organizations. Sterling, VA: Stylus Publishing, LLC.

    Read the article

  • Sorting for 2D Drawing

    - by Nexian
    okie, looked through quite a few similar questions but still feel the need to ask mine specifically (I know, crazy). Anyhoo: I am drawing a game in 2D (isometric). My objects have their own arrays (i.e. Tiles[], Objects[], Particles[], etc.). I want to have a draw[] array to hold anything that will be drawn. Because it is 2D, I assume I must prioritise depth over any other sorting or things will look weird. My game is turn based, so Tiles and Objects won't be changing position every frame. However, Particles probably will. So I am thinking I can populate the draw[] array (probably a vector?) with what is on-screen and have it add/remove object, tile & particle references when I pan the screen or when a tile or object is specifically moved. No idea how often I'm going to have to update for particles right now. I want to do this because my game may have many thousands of objects and I want to iterate through as few as possible when drawing. I plan to give each element a depth value to sort by. So, my questions: Does the above method sound like a good way to deal with the actual drawing? What is the most efficient way to sort a vector? Most of the time it won't require efficiency. But for panning the screen it will. And I imagine if I have many particles on screen moving across multiple tiles, it may happen quite often. For reference, my screen will be drawing about 2,800 objects at any one time. When panning, it will be adding/removing about ~200 elements every second, and each new element will need adding in the correct location based on depth.
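
    A minimal sketch of the draw-list idea (C#, with a hypothetical IDepthDrawable interface standing in for whatever tiles, objects and particles actually are): keep the visible set in a list, mark it dirty whenever something is added, removed or moved, and only re-sort then, rather than every frame.

        using System.Collections.Generic;

        public interface IDepthDrawable { float Depth { get; } }

        public class DrawList
        {
            private readonly List<IDepthDrawable> items = new List<IDepthDrawable>();
            private bool dirty;

            public void Add(IDepthDrawable d)    { items.Add(d); dirty = true; }
            public void Remove(IDepthDrawable d) { items.Remove(d); dirty = true; }

            public IList<IDepthDrawable> GetSorted()
            {
                if (dirty)
                {
                    // List<T>.Sort is an in-place comparison sort; fine for the
                    // occasional re-sort after panning or a particle move.
                    items.Sort((a, b) => a.Depth.CompareTo(b.Depth));
                    dirty = false;
                }
                return items;
            }
        }

    For the ~200 adds per second while panning, inserting each new element at the position returned by List<T>.BinarySearch (with a depth comparer) instead of re-sorting the whole list is another option worth profiling.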

    Read the article

  • Tell a user whether they have already viewed an item in a list. How?

    - by user2738308
    It is pretty common for a web application to display a list of items and, for each item in the list, to indicate to the current user whether they have already viewed the associated item. An approach that I have taken in the past is to store HasViewed objects that contain the Id of a viewed item and the Id of the User who has viewed that item. When it comes time to display a list of items, this requires querying for the items, separately querying for the HasViewed objects, and then combining the results into a set of objects constructed solely for the purpose of displaying them in the view. Each list item (e.g. an li) then uses the (e.g.) has_viewed property of the objects constructed above. I would like to know whether others take a different approach and can recommend alternative ways to achieve this functionality.
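
    For illustration only (C#, with hypothetical Item/ItemRow types; the original could be on any stack): the two-query combination described above usually fetches the ids the current user has viewed once, puts them in a set, and flags each item while building the view models, so the per-item check is O(1).

        using System.Collections.Generic;
        using System.Linq;

        public class Item    { public int Id; public string Title; }
        public class ItemRow { public Item Item; public bool HasViewed; }

        public static class ItemListBuilder
        {
            // viewedItemIds would come from a single query against the HasViewed
            // records, filtered to the current user.
            public static List<ItemRow> Build(IEnumerable<Item> items, ISet<int> viewedItemIds)
            {
                return items
                    .Select(i => new ItemRow { Item = i, HasViewed = viewedItemIds.Contains(i.Id) })
                    .ToList();
            }
        }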

    Read the article

  • What data-structure/algorithm will allow me to send a list of key/value dictionaries using the least amount of bits?

    - by user12365
    I have server objects that have corresponding client objects. The data to be kept in sync is inside the server object's key/value dictionary. To keep the client objects in sync with the server objects, I want the server to send the key/value dictionary every frame for each object. What data-structure/algorithm will allow me to send a list of key/value dictionaries using the least amount of bits? Bonus constraint 1: For each type of object, the values of some keys change more often than others. Bonus constraint 2: Memory usage on the server side is relatively expensive.
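
    A sketch of one possible direction (C#; not from the question, and the names are made up): give every key of a given object type a fixed index that both sides know, then each frame send a small bitmask of which keys changed plus only those values. Note the trade-off against bonus constraint 2: the server has to remember the last value it sent for each key.

        using System.Collections.Generic;
        using System.IO;

        public class DeltaDictionaryWriter
        {
            private readonly string[] keys;   // agreed ordering, <= 32 keys in this sketch
            private readonly Dictionary<string, string> lastSent = new Dictionary<string, string>();

            public DeltaDictionaryWriter(string[] orderedKeys) { keys = orderedKeys; }

            // Assumes every key is present in 'current' each frame.
            public void WriteDelta(BinaryWriter writer, IDictionary<string, string> current)
            {
                uint changedMask = 0;
                for (int i = 0; i < keys.Length; i++)
                {
                    string previous;
                    lastSent.TryGetValue(keys[i], out previous);
                    if (previous != current[keys[i]]) changedMask |= 1u << i;
                }

                writer.Write(changedMask);                    // one bit per key
                for (int i = 0; i < keys.Length; i++)
                {
                    if ((changedMask & (1u << i)) == 0) continue;
                    writer.Write(current[keys[i]]);           // value sent only when it changed
                    lastSent[keys[i]] = current[keys[i]];
                }
            }
        }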

    Read the article

  • Multitenancy in SQL Azure

    - by cibrax
    If you are building a SaaS application in Windows Azure that relies on SQL Azure, it is probable that you will need to support multiple tenants at the database level. This is a short overview of the different approaches you can use to support that scenario.

    A different database per tenant. A new database is created and assigned when a tenant is provisioned.
    Pros: Complete isolation between tenants. All the data for a tenant lives in a database only that tenant can access.
    Cons: It is not cost effective. SQL Azure databases are not cheap, and the minimum size for a database is 1GB, so you might be paying for storage that you don't really use. A different connection pool is required per database. Updates must be replicated across all the databases. You need multiple backup strategies across all the databases.

    Multiple schemas in a database shared by all the tenants. A single database is shared among all the tenants, but every tenant is assigned a different schema and database user.
    Pros: You only pay for a single database. Data is isolated at the database level; if the credentials for one tenant are compromised, the data for the other tenants is not.
    Cons: You need to replicate all the database objects in every schema, so the number of objects can increase indefinitely. Updates must be replicated across all the schemas. The connection pool for the database must maintain a different connection per tenant (or set of credentials). A different user is required per tenant, which is stored at server level, and you have to back up that user independently.

    Centralizing the database access with stored procedures in a database shared by all the tenants. A single database is shared among all the tenants, but nobody can read the data directly from the tables. All the data operations are performed through stored procedures that centralize access to the tenant data. The stored procedures contain some logic to map the database user to a specific tenant.
    Pros: You only pay for a single database. You only have one set of objects to maintain and back up.
    Cons: There is no real isolation; all the data for the different tenants is shared in the same tables. You cannot use a traditional ORM like EF Code First for consuming the data. A different user is required per tenant, which is stored at server level, and you have to back up that user independently.

    SQL Federations. A single database is shared among all the tenants, but a different federation is used per tenant. A federation, in a few words, is a mechanism for horizontal scaling in SQL Azure which basically uses the idea of logical partitions to distribute data based on certain criteria.
    Pros: You only have a single database with multiple federations. You can use filtering in the connections to pick the right federation, so any ORM could be used to consume the data.
    Cons: There is no real isolation at the database level; the isolation is enforced programmatically with federations.

    Read the article

  • Efficient algorithm to find the set of numbers in a range

    - by user133020
    Suppose I have a sorted array of numbers, and the set I care about contains every object that is either one of the numbers or a product of some of them. For example, if the sorted array is [1, 2, 7] then the set is {1, 2, 7, 1*2, 1*7, 2*7, 1*2*7}. As you can see, if there are n numbers in the sorted array, the size of the set is 2^n - 1. My question is: for a given sorted array of n numbers, how can I find all the objects in the set that lie in a given interval? For example, if the sorted array is [1, 2, 3 ... 19, 20], what is the most efficient algorithm to find the objects that are larger than 1000 and less than 2500 (without calculating all 2^n - 1 objects)?
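
    A sketch of one way to avoid materialising all 2^n - 1 products (C#, assuming the numbers are positive): depth-first search over the sorted array, and cut a branch off as soon as the running product exceeds the upper bound, since every factor still available is at least as large as the ones already used. The worst case is still exponential, but branches whose product has already overshot the interval are never explored.

        using System.Collections.Generic;

        public static class ProductEnumerator
        {
            public static List<long> ProductsInRange(long[] sortedValues, long lo, long hi)
            {
                var results = new List<long>();
                Collect(sortedValues, 0, 1, false, lo, hi, results);
                return results;
            }

            private static void Collect(long[] values, int start, long product, bool nonEmpty,
                                        long lo, long hi, List<long> results)
            {
                if (nonEmpty && product >= lo && product <= hi)
                    results.Add(product);

                for (int i = start; i < values.Length; i++)
                {
                    long next = product * values[i];     // note: long can still overflow for huge inputs
                    if (next > hi) break;                // sorted ascending: later factors only make it bigger
                    Collect(values, i + 1, next, true, lo, hi, results);
                }
            }
        }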

    Read the article

  • XNA 4.0 - container with content, that can slide (C#)

    - by DijkeMark
    I got an idea, but I have no idea how to make it. Okay, so here is the deal. I want a container which can hold certain objects (these objects will draw the sprites/graphics). Because of different screen sizes, I want to be able to scale the container's width and height. But I do not want objects that end up outside the container because of the scaling to be visible, since I want the objects all to be positioned horizontally next to each other, with a horizontal slider bar so I can slide from left to right within the container. I wonder if anyone could point me in the right direction. Thanks in advance, Mark
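
    One way to get the clipping in XNA 4.0 is the scissor test: set GraphicsDevice.ScissorRectangle to the container's screen rectangle and draw the children with a RasterizerState that has ScissorTestEnable turned on. A rough sketch, where ContainerChild, LocalX and the scroll offset are made-up stand-ins for your own objects:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class ContainerChild
        {
            public Texture2D Texture;
            public float LocalX;    // x position relative to the container's content
        }

        public static class ScrollContainer
        {
            private static readonly RasterizerState ClipState =
                new RasterizerState { ScissorTestEnable = true };

            public static void Draw(SpriteBatch spriteBatch, Rectangle containerBounds,
                                    float scrollOffset, IEnumerable<ContainerChild> children)
            {
                // Anything rasterised outside this rectangle is discarded.
                spriteBatch.GraphicsDevice.ScissorRectangle = containerBounds;
                spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                                  null, null, ClipState);

                foreach (var child in children)
                {
                    var position = new Vector2(containerBounds.X + child.LocalX - scrollOffset,
                                               containerBounds.Y);
                    spriteBatch.Draw(child.Texture, position, Color.White);
                }

                spriteBatch.End();
            }
        }

    Moving the slider then only changes scrollOffset; the scissor rectangle keeps whatever slides past the container's edges from being drawn.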

    Read the article

  • PHP parsing XML file with and without namespaces

    - by Mike
    I need to get an XML file into a database. That's not the problem. I can read it, parse it and create some objects to map to the DB. The problem is that sometimes the XML file can contain namespaces and sometimes not. Furthermore, sometimes there is no namespace defined at all. So what I first got was something like this: <?xml version="1.0" encoding="UTF-8"?> <struct xmlns:b="http://www.w3schools.com/test/"> <objects> <object> <node_1>value1</node_1> <node_2>value2</node_2> <node_3 iso_land="AFG"/> <coords lat="12.00" long="13.00"/> </object> </objects> </struct> And the parsing: $t = $xml->xpath('/objects/object'); foreach($nodes AS $node) { if($t[0]->$node) { $obj->$node = (string) $t[0]->$node; } } That's fine as long as there are no namespaces. Here comes the XML file with namespaces: <?xml version="1.0" encoding="UTF-8"?> <b:struct xmlns:b="http://www.w3schools.com/test/"> <b:objects> <b:object> <b:node_1>value1</b:node_1> <b:node_2>value2</b:node_2> <b:node_3 iso_land="AFG"/> <b:coords lat="12.00" long="13.00"/> </b:object> </b:objects> </b:struct> I now came up with something like this: $xml = simplexml_load_file("test.xml"); $namespaces = $xml->getNamespaces(TRUE); $ns = count($namespaces) ? 'a:' : ''; $xml->registerXPathNamespace("a", "http://www.w3schools.com/test/"); $nodes = array('node_1', 'node_2'); $obj = new stdClass(); foreach($nodes AS $node) { $t = $xml->xpath('/'.$ns.'objects/'.$ns.'object/'.$ns.$node); if($t[0]) { $obj->$node = (string) $t[0]; } } $t = $xml->xpath('/'.$ns.'objects/'.$ns.'object/'.$ns.'node_3'); if($t[0]) { $obj->iso_land = (string) $t[0]->attributes()->iso_land; } $t = $xml->xpath('/'.$ns.'objects/'.$ns.'object/'.$ns.'coords'); if($t[0]) { $obj->lat = (string) $t[0]->attributes()->lat; $obj->long = (string) $t[0]->attributes()->long; } That works with namespaces and without. But I feel that there must be a better way. Before that I could do something like this: $t = $xml->xpath('/'.$ns.'objects/'.$ns.'object'); foreach($nodes AS $node) { if($t[0]->$node) { $obj->$node = (string) $t[0]->$node; } } But that just won't work with namespaces.

    Read the article

  • Customize baseUrl and baseDir in CKFinder

    - by Nick Petrie
    We use CKEditor and CKFinder for ColdFusion in many of our CMS applications. These apps point to different sites on our server, so we want CKFinder set up to upload files to directories specific to each app. But we only want one shared location for the CKEditor and CKFinder files on the server. In the config.cfm file, we have set up the default baseUrl and baseDir like this: config.baseUrl = "http://www.oursite.com/_files/site1/ckfinder_uploads/"; config.baseDir = '\\ourserver01\_files\site1\ckfinder_uploads\'; In the header file for each app, we include the following to instantiate CKEditor and CKFinder (including the jQuery adapter): <script type="text/javascript" src="/shared/ckeditor/ckeditor.js"></script> <script type="text/javascript" src="/shared/ckeditor/adapters/jquery.js"></script> <script type="text/javascript" src="/shared/ckfinder/ckfinder.js"></script> <script type="text/javascript"> $(document).ready(function(){ CKFinder.setupCKEditor( null, '/shared/ckfinder/' ); }); </script> When I open a CKFinder window in one of the apps, it correctly opens to the default baseURL/baseDir. However, how can I override those defaults? I tried changing the CKFinder setupCKEditor call to the following, with no luck: CKFinder.setupCKEditor( null, { basePath:'/shared/ckfinder/', baseUrl:"http://www.oursite.com/_files/NEWSITE/ckfinder_uploads/", baseDir:"\\\\ourserver01\\_files\\NEWSITE\\ckfinder_uploads\\" } ); It just ignored this and used the defaults. Thoughts? Thanks!!

    Read the article

  • Share code between projects in tfs 2010

    - by Jimmy Engtröm
    Hi, what is the best way to handle code sharing in TFS 2010? We have a couple of Visual Studio projects that other Visual Studio projects use. For example:

    Shared Project
    Project 1 Solution
    -Shared Project
    -Project 1 Project
    Project 2 Solution
    -Shared Project
    -Project 2 Project

    Also we have third party code, for example:

    Third Party
    -Telerik
    --2009.1.402.35
    --2009.02.0701.35

    When I open my "Project 1" solution I want my shared code project to be included in that solution (that's the way we work today). We basically have one TFS Project that contains all the code. Now we want to use it the "right" (?) way: we would like to have Project 1 and 2 in separate TFS solutions. If, for example, I make sure we have all our projects in the same structure on disk and just add the shared project to my Project 1 solution (even if the projects reside in two different TFS Projects), would that work with builds? How have you solved this problem? I guess we are not the only ones sharing code between projects. Cheers /Jimmy

    Read the article

  • Example of [NSDictionary getObjects:andKeys:]

    - by eagle
    I couldn't find a working example of the method [NSDictionary getObjects:andKeys:] on Google. The only link I could find, doesn't compile. I provided the errors/warnings here in case someone is searching for them. The documentation says that it returns a C array. NSDictionary *myDictionary = ...; id objects[]; // Error: Array size missing in 'objects' id keys[]; // Error: Array size missing in 'keys' [myDictionary getObjects:&objects andKeys:&keys]; for (int i = 0; i < count; i++) { id obj = objects[i]; id key = keys[i]; } . NSDictionary *myDictionary = ...; NSInteger count = [myDictionary count]; id objects[count]; id keys[count]; [myDictionary getObjects:&objects andKeys:&keys]; // Warning: Passing argument 1 of 'getObjects:andKeys:' from incompatible pointer type. for (int i = 0; i < count; i++) { id obj = objects[i]; id key = keys[i]; } Please provide a working example of this method.

    Read the article

  • how to redirect the page when you click the tab (jQuery tab)

    - by kumar
    Hello, I have four jQuery tabs in my master page:

    <div class="floatCenter"></div>
    <ul>
    <li><%=Html.ActionLink("A", "index", "shared", new { area = "a" }, new { @id = "a" })%></li>
    <li><%=Html.ActionLink("B", "index", "shared", new { area = "b" }, new { @id = "b" })%></li>
    <li><%=Html.ActionLink("C", "index", "shared", new { area = "c" }, new { @id = "c" })%></li>
    <li><%=Html.ActionLink("D", "index", "shared", new { area = "d" }, new { @id = "d" })%></li>
    <li><%=Html.ActionLink("E", "index", "shared", new { area = "e" }, new { @id = "e" })%></li>
    <li><%=Html.ActionLink("F", "index", "shared", new { area = "f" }, new { @id = "f" })%></li>
    </ul>
    </div>

    When a user who should not have permission clicks the B tab, how do I deny them access to that link and redirect them home again? And how do I also prevent access to the controller action behind the B tab when it is requested directly by URL? Thanks

    Read the article

  • How to delete child object in NHibernate?

    - by Mark Struzinski
    I have a parent object which has a one to many relationship with an IList of child objects. What is the best way to delete the child objects? I am not deleting the parent. My parent object contains an IList of child objects. Here is the mapping for the one to many relationship: <bag name="Tiers" cascade="all"> <key column="mismatch_id_no" /> <one-to-many class="TGR_BL.PromoTier,TGR_BL"/> </bag> If I try to remove all objects from the collection using clear(), then call SaveOrUpdate(), I get this exception: System.Data.SqlClient.SqlException: Cannot insert the value NULL into column If I try to delete the child objects individually then remove them from the parent, I get an exception: deleted object would be re-saved by cascade This is my first time dealing with deleting child objects in NHibernate. What am I doing wrong? edit: Just to clarify - I'm NOT trying to delete the parent object, just the child objects. I have the relationship set up as a one to many on the parent. Do I also need to create a many-to-one relationship on the child object mapping?
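
    For reference, the behaviour being asked about (children deleted when they are removed from the parent's collection) is usually handled in NHibernate by marking the collection cascade="all-delete-orphan" rather than deleting the children by hand; a sketch of the adjusted mapping:

        <bag name="Tiers" cascade="all-delete-orphan">
          <key column="mismatch_id_no" />
          <one-to-many class="TGR_BL.PromoTier,TGR_BL"/>
        </bag>

    If the mismatch_id_no column is NOT NULL, it is also common to mark the bag inverse="true" and give PromoTier a many-to-one back to the parent, so NHibernate deletes the orphaned rows instead of trying to null out the foreign key first.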

    Read the article

  • Remove file from git repository (history)

    - by Devenv
    (solved, see bottom of the question body) Looking for this for a long time now, what I have till now is: http://dound.com/2009/04/git-forever-remove-files-or-folders-from-history/ and http://progit.org/book/ch9-7.html Pretty much the same method, but both of them leave objects in pack files... Stuck. What I tried: git filter-branch --index-filter 'git rm --cached --ignore-unmatch file_name' rm -Rf .git/refs/original rm -Rf .git/logs/ git gc Still have files in the pack, and this is how I know it: git verify-pack -v .git/objects/pack/pack-3f8c0...bb.idx | sort -k 3 -n | tail -3 And this: git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch file_name" HEAD rm -rf .git/refs/original/ && git reflog expire --all && git gc --aggressive --prune The same... Tried git clone trick, it removed some of the files (~3000 of them) but the largest files are still there... I have some large legacy files in the repository, ~200M, and I really don't want them there... And I don't want to reset the repository to 0 :( SOLUTION: This is the shortest way to get rid of the files: check .git/packed-refs - my problem was that I had there a refs/remotes/origin/master line for a remote repository, delete it, otherwise git won't remove those files (optional) git verify-pack -v .git/objects/pack/#{pack-name}.idx | sort -k 3 -n | tail -5 - to check for the largest files (optional) git rev-list --objects --all | grep a0d770a97ff0fac0be1d777b32cc67fe69eb9a98 - to check what files those are git filter-branch --index-filter 'git rm --cached --ignore-unmatch file_names' - to remove the file from all revisions rm -rf .git/refs/original/ - to remove git's backup git reflog expire --all --expire='0 days' - to expire all the loose objects (optional) git fsck --full --unreachable - to check if there are any loose objects git repack -A -d - repacking the pack git prune - to finally remove those objects

    Read the article

  • "You have already activated" message even when using bundle exec

    - by juanpastas
    I am installing gems in my Gemfile in shared path as Capistrano does by default, and when I run: bundle exec rake assets:precompile RAILS_ENV=production I get: You have already activated rake 0.9.2.2, but your Gemfile requires rake 10.0.4. Using bundle exec may solve this. See that: cat Gemfile.lock | grep rake returns: rake (>= 0.8.7) rake (10.0.4) This is my gem environment output: - RUBYGEMS VERSION: 1.8.24 - RUBY VERSION: 1.9.3 (2013-06-27 patchlevel 448) [x86_64-linux] - INSTALLATION DIRECTORY: /home/bitnami/my_app/shared/bundle/ruby/1.9.1/ - RUBY EXECUTABLE: /opt/bitnami/ruby/bin/ruby - EXECUTABLE DIRECTORY: /home/bitnami/my_app/shared/bundle/ruby/1.9.1/bin - RUBYGEMS PLATFORMS: - ruby - x86_64-linux - GEM PATHS: - /home/bitnami/my_app/shared/bundle/ruby/1.9.1/ - GEM CONFIGURATION: - :update_sources => true - :verbose => true - :benchmark => false - :backtrace => false - :bulk_threshold => 1000 - "gemhome" => "/home/bitnami/my_app/shared/bundle/ruby/1.9.1/" - "gempath" => ["/home/bitnami/my_app/shared/bundle/ruby/1.9.1/"] - REMOTE SOURCES: - http://rubygems.org/ Update which -a rake /opt/bitnami/rvm/bin/rake /opt/bitnami/ruby/bin/rake Update 2 I tried giving full path to rake, but same problem

    Read the article

  • Static Property losing its value intermittently?

    - by joedotnot
    Is there something fundamentally wrong with the following design, or can anyone see why would the static properties sometimes loose their values ? I have a class library project containing a class AppConfig; this class is consumed by a Webforms project. The skeleton of AppConfig class is as follows: Public Class AppConfig Implements IConfigurationSectionHandler Private Const C_KEY1 As String = "WebConfig.Key.1" Private Const C_KEY2 As String = "WebConfig.Key.2" Private Const C_KEY1_DEFAULT_VALUE as string = "Key1defaultVal" Private Const C_KEY2_DEFAULT_VALUE as string = "Key2defaultVal" Private Shared m_field1 As String Private Shared m_field2 As String Public Shared ReadOnly Property ConfigValue1() As String Get ConfigValue1= m_field1 End Get End Property Public Shared ReadOnly Property ConfigValue2() As String Get ConfigValue2 = m_field2 End Get End Property Public Shared Sub OnApplicationStart() m_field1 = ReadSetting(C_KEY1, C_KEY1_DEFAULT_VALUE) m_field2 = ReadSetting(C_KEY2, C_KEY1_DEFAULT_VALUE) End Sub Public Overloads Shared Function ReadSetting(ByVal key As String, ByVal defaultValue As String) As String Try Dim setting As String = System.Configuration.ConfigurationManager.AppSettings(key) If setting Is Nothing Then ReadSetting = defaultValue Else ReadSetting = setting End If Catch ReadSetting = defaultValue End Try End Function Public Function Create(ByVal parent As Object, ByVal configContext As Object, ByVal section As System.Xml.XmlNode) As Object Implements System.Configuration.IConfigurationSectionHandler.Create Dim objSettings As NameValueCollection Dim objHandler As NameValueSectionHandler objHandler = New NameValueSectionHandler objSettings = CType(objHandler.Create(parent, configContext, section), NameValueCollection) Return 1 End Function End Class The Static Properties get set once on application start, from the Application_Start event of the Global.asax Sub Application_Start(ByVal sender As Object, ByVal e As EventArgs) //Fires when the application is started AppConfig.OnApplicationStart() End Sub Thereafter, whenever we want to access a value in the Web.Config from anywhere, e.g. aspx page code-behind or another class or referenced class, we simply call the static property. For example, AppConfig.ConfigValue1() AppConfig.ConfigValue2() This is turn returns the value stored in the static backing fields m_field1, m_field2 Problem is sometimes these values are empty string, when clearly the Web.Config entry has values. Is there something fundamentally wrong with the above design, or is it reasonable to expect the static properties would keep their value for the life of the Application session?

    Read the article

  • Force sending a user to custom QuerySet.

    - by Jack M.
    I'm trying to secure an application so that users can only see objects which are assigned to them. I've got a custom QuerySet which works for this, but I'm trying to find a way to force the use of this additional functionality. Here is my Model: class Inquiry(models.Model): ts = models.DateTimeField(auto_now_add=True) assigned_to_user = models.ForeignKey(User, blank=True, null=True, related_name="assigned_inquiries") objects = CustomQuerySetManager() class QuerySet(QuerySet): def for_user(self, user): return self.filter(assigned_to_user=user) (The CustomQuerySetManager is documented over here, if it is important.) I'm trying to force everything to use this filtering, so that other methods will raise an exception. For example: Inquiry.objects.all() ## Should raise an exception. Inquiry.objects.filter(pk=69) ## Should raise an exception. Inquiry.objects.for_user(request.user).filter(pk=69) ## Should work. inqs = Inquiry.objects.for_user(request.user) ## Should work. inqs.filter(pk=69) ## Should work. It seems to me that there should be a way to force the security of these objects by allowing only certain users to access them. I am not concerned with how this might impact the admin interface.

    Read the article

  • strange memory error when deleting object from Core Data

    - by llloydxmas
    I have an application that downloads an XML file, parses the file, and creates Core Data objects while doing so. In the parse code I have a function called 'emptydatacontext' that removes all items from Core Data before creating replacements from the XML file. This method looks like this: -(void) emptyDataContext { NSMutableArray* mutableFetchResults = [CoreDataHelper getObjectsFromContext:@"Condition" :@"road" :NO :managedObjectContext]; NSFetchRequest * allCon = [[NSFetchRequest alloc] init]; [allCon setEntity:[NSEntityDescription entityForName:@"Condition" inManagedObjectContext:managedObjectContext]]; NSError * error = nil; NSArray * conditions = [managedObjectContext executeFetchRequest:allCon error:&error]; [allCon release]; for (NSManagedObject * condition in conditions) { [managedObjectContext deleteObject:condition]; } } The first time this runs it deletes all objects and functions as it should, creating new objects from the XML file. I created an 'update' button that starts the exact same process of retrieving and parsing the file, just as it did the first time. All is well until it's time to delete the Core Data objects again. This 'deleteObject' call produces an "EXC_BAD_ACCESS" error each time. This only happens on the second time through. See this image for the debugger window as it appears when walking through the deletion FOR loop on the second iteration. Conditions is the fetched array of 7 objects with the objects below. Condition should be an individual condition. link text As you can see, 'condition' does not match any of the objects in the 'conditions' array. I'm sure this is why I'm getting the memory access errors. Just not sure why this fetch (or the FOR) is returning a wrong reference. All the code that successfully performs this function on the first iteration is used in the second, but with very different results. Thanks in advance for the help!

    Read the article

  • Texture2D.Bounds.Intersect, but the Bounds never move? - XNA, .Net 4.0

    - by Gineer
    Hi all, I am still shiny new to XNA, so please forgive any stupid questions and statements in this post (the added issue is that I am using Visual Studio 2010 with .NET 4.0, which also means very few examples exist out on the web - well, none that I could find easily): I have two 2D objects in a "game" that I am using to learn more about XNA. I need to figure out when these two objects intersect. I noticed that the Texture2D object has a property named "Bounds", which in turn has a method named "Intersects" that takes a Rectangle (the other Texture2D.Bounds) as an argument. However, when you run the code, the objects always intersect even if they are on separate sides of the screen. When I step into the code, I noticed that for the Texture2D Bounds I get 4 parameters back when you mouse over the Bounds, and the X and Y coordinates always read "X = 0, Y = 0" for both objects (hence they always intersect). The thing that confuses me is the fact that the Bounds property is on the Texture rather than on the Position (or Vector2) of the objects. I eventually created a little helper method that takes in the objects and their positions and then calculates whether they intersect, but I'm sure there must be a better way. Any suggestions or pointers would be much appreciated. Gineer
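
    Texture2D.Bounds only describes the texture itself, so its X and Y really are always 0; the usual fix is to build a Rectangle from each object's position plus its texture size and intersect those. A small sketch (position and texture stand in for whatever the game objects already hold):

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public static class CollisionHelper
        {
            // World-space rectangle for a sprite drawn at 'position'.
            public static Rectangle BoundsAt(Texture2D texture, Vector2 position)
            {
                return new Rectangle((int)position.X, (int)position.Y,
                                     texture.Width, texture.Height);
            }

            public static bool Overlaps(Texture2D textureA, Vector2 positionA,
                                        Texture2D textureB, Vector2 positionB)
            {
                return BoundsAt(textureA, positionA).Intersects(BoundsAt(textureB, positionB));
            }
        }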

    Read the article

  • java: assigning object reference IDs for custom serialization

    - by Jason S
    For various reasons I have a custom serialization where I am dumping some fairly simple objects to a data file. There are maybe 5-10 classes, and the object graphs that result are acyclic and pretty simple (each serialized object has 1 or 2 references to another that are serialized). For example: class Foo { final private long id; public Foo(long id, /* other stuff */) { ... } } class Bar { final private long id; final private Foo foo; public Bar(long id, Foo foo, /* other stuff */) { ... } } class Baz { final private long id; final private List<Bar> barList; public Baz(long id, List<Bar> barList, /* other stuff */) { ... } } The id field is just for the serialization, so that when I am serializing to a file, I can write objects by keeping a record of which IDs have been serialized so far, then for each object checking whether its child objects have been serialized and writing the ones that haven't, finally writing the object itself by writing its data fields and the IDs corresponding to its child objects. What's puzzling me is how to assign id's. I thought about it, and it seems like there are three cases for assigning an ID: dynamically-created objects -- id is assigned from a counter that increments reading objects from disk -- id is assigned from the number stored in the disk file singleton objects -- object is created prior to any dynamically-created object, to represent a singleton object that is always present. How can I handle these properly? I feel like I'm reinventing the wheel and there must be a well-established technique for handling all the cases.

    Read the article

  • Code management in different projects with different svn repositories

    - by uzay95
    First of all, I want to tell you what kind of system I have and what I want to build on.

    1- A Solution (has)
    a- Shared Class Library project (which is for lots of different solutions)
    b- Another Class Library project (which is only for this solution)
    c- Web Application project (which is the main part of this solution)
    d- Shared Web Service project (which also serves different solutions)

    2- B Solution (has)
    a- Shared Class Library project (which is for lots of different solutions)
    c- Windows Form Application project (which is the main part of this solution)
    d- Web Service project (which also serves different solutions)

    and other projects like that.... I am using xp-dev.com as our svn repository server, and I opened different projects for these items (Shared Class Library, Web Service project, Windows Form Application project, Web Application project, Another Class Library project). I want to do the versioning of all these projects, of course. My first question is: should I put each project (one solution) in one svn repository to get its revision number later on? Or should I put each of them in a different svn repository and keep (write down) the correct version number that is used to publish/deploy every solution? If I use one svn repository for each project (Shared Class Lib, Web App, Shared Web Service....), how can I relate the right svn address and version in VS 2010 within the real solution? So, how do you manage your repositories and projects?

    Read the article

  • Normal Redundancy (Double Mirroring) Option Available

    - by TammyBednar
    The Oracle Database Appliance 2.4 Patch was released last week and provides you an option of ASM normal redundancy (double mirroring) during the initial deployment of the Database Appliance. The default deployment of the Oracle Database Appliance is high redundancy for the +DATA and +RECO disk groups. While there is 12TB of raw shared storage available, the Database Backup Location and Disk Group Redundancy govern how much usable storage is presented after the initial deployment is completed. The Database Backup Location options are Local or External. When the Local Backup Option is selected, this means that 60% of the available shared storage will be allocated for the Fast Recovery Area that contains database backups and archive logs. The External Backup Option will allocate 20% of the available shared storage to the Fast Recovery Area.

    So, let's look at an example of High Redundancy and External Backups.
    Disk Group Redundancy – High --> Triple Mirroring to provide ~4TB of available storage
    Database Backup Location – External --> 20% of available shared storage allocated to +RECO
    +DATA = 3.2TB of usable storage, +RECO = 0.8TB of usable storage

    What about Normal Redundancy with External Backups?
    Disk Group Redundancy – Normal --> Double Mirroring to provide ~6TB of available storage
    Database Backup Location – External --> 20% of available shared storage allocated to +RECO
    +DATA = 4.8TB of usable storage, +RECO = 1.2TB of usable storage

    As a best practice, we would recommend using Normal Redundancy for your test and/or development Oracle Database Appliances and High Redundancy for production.

    Read the article

  • IXmlSerializable Dictionary

    - by Shimmy
    I was trying to create a generic Dictionary that implements IXmlSerializable. Here is my trial: Sub Main() Dim z As New SerializableDictionary(Of String, String) z.Add("asdf", "asd") Console.WriteLine(z.Serialize) End Sub Result: <?xml version="1.0" encoding="utf-16"?><Entry key="asdf" value="asd" /> I placed a breakpoint on top of the WriteXml method and I see that when it stops, the writer contains no data at all, and IMHO it should contain the root element and the xml declaration. <Serializable()> _ Public Class SerializableDictionary(Of TKey, TValue) : Inherits Dictionary(Of TKey, TValue) : Implements IXmlSerializable Private Const EntryString As String = "Entry" Private Const KeyString As String = "key" Private Const ValueString As String = "value" Private Shared ReadOnly AttributableTypes As Type() = New Type() {GetType(Boolean), GetType(Byte), GetType(Char), GetType(DateTime), GetType(Decimal), GetType(Double), GetType([Enum]), GetType(Guid), GetType(Int16), GetType(Int32), GetType(Int64), GetType(SByte), GetType(Single), GetType(String), GetType(TimeSpan), GetType(UInt16), GetType(UInt32), GetType(UInt64)} Private Shared ReadOnly GetIsAttributable As Predicate(Of Type) = Function(t) AttributableTypes.Contains(t) Private Shared ReadOnly IsKeyAttributable As Boolean = GetIsAttributable(GetType(TKey)) Private Shared ReadOnly IsValueAttributable As Boolean = GetIsAttributable(GetType(TValue)) Private Shared ReadOnly GetElementName As Func(Of Boolean, String) = Function(isKey) If(isKey, KeyString, ValueString) Public Function GetSchema() As System.Xml.Schema.XmlSchema Implements System.Xml.Serialization.IXmlSerializable.GetSchema Return Nothing End Function Public Sub WriteXml(ByVal writer As XmlWriter) Implements IXmlSerializable.WriteXml For Each entry In Me writer.WriteStartElement(EntryString) WriteData(IsKeyAttributable, writer, True, entry.Key) WriteData(IsValueAttributable, writer, False, entry.Value) writer.WriteEndElement() Next End Sub Private Sub WriteData(Of T)(ByVal attributable As Boolean, ByVal writer As XmlWriter, ByVal isKey As Boolean, ByVal value As T) Dim name = GetElementName(isKey) If attributable Then writer.WriteAttributeString(name, value.ToString) Else Dim serializer As New XmlSerializer(GetType(T)) writer.WriteStartElement(name) serializer.Serialize(writer, value) writer.WriteEndElement() End If End Sub Public Sub ReadXml(ByVal reader As XmlReader) Implements IXmlSerializable.ReadXml Dim empty = reader.IsEmptyElement reader.Read() If empty Then Exit Sub Clear() While reader.NodeType <> XmlNodeType.EndElement While reader.NodeType = XmlNodeType.Whitespace reader.Read() Dim key = ReadData(Of TKey)(IsKeyAttributable, reader, True) Dim value = ReadData(Of TValue)(IsValueAttributable, reader, False) Add(key, value) If Not IsKeyAttributable AndAlso Not IsValueAttributable Then reader.ReadEndElement() Else reader.Read() While reader.NodeType = XmlNodeType.Whitespace reader.Read() End While End While reader.ReadEndElement() End While End Sub Private Function ReadData(Of T)(ByVal attributable As Boolean, ByVal reader As XmlReader, ByVal isKey As Boolean) As T Dim name = GetElementName(isKey) Dim type = GetType(T) If attributable Then Return Convert.ChangeType(reader.GetAttribute(name), type) Else Dim serializer As New XmlSerializer(type) While reader.Name <> name reader.Read() End While reader.ReadStartElement(name) Dim value = serializer.Deserialize(reader) reader.ReadEndElement() Return value End If End Function Public Shared Function Serialize(ByVal dictionary 
As SerializableDictionary(Of TKey, TValue)) As String Dim sb As New StringBuilder(1024) Dim sw As New StringWriter(sb) Dim xs As New XmlSerializer(GetType(SerializableDictionary(Of TKey, TValue))) xs.Serialize(sw, dictionary) sw.Dispose() Return sb.ToString End Function Public Shared Function Deserialize(ByVal xml As String) As SerializableDictionary(Of TKey, TValue) Dim xs As New XmlSerializer(GetType(SerializableDictionary(Of TKey, TValue))) Dim xr As New XmlTextReader(xml, XmlNodeType.Document, Nothing) Return xs.Deserialize(xr) xr.Close() End Function Public Function Serialize() As String Dim sb As New StringBuilder Dim xw = XmlWriter.Create(sb) WriteXml(xw) xw.Close() Return sb.ToString End Function Public Sub Parse(ByVal xml As String) Dim xr As New XmlTextReader(xml, XmlNodeType.Document, Nothing) ReadXml(xr) xr.Close() End Sub End Class

    Read the article

  • IXmlSerializable Dictionary problem

    - by Shimmy
    I was trying to create a generic Dictionary that implements IXmlSerializable. Here is my trial: Sub Main() Dim z As New SerializableDictionary(Of String, String) z.Add("asdf", "asd") Console.WriteLine(z.Serialize) End Sub Result: <?xml version="1.0" encoding="utf-16"?><Entry key="asdf" value="asd" /> I placed a breakpoint on top of the WriteXml method and I see that when it stops, the writer contains no data at all, and IMHO it should contain the root element and the xml declaration. <Serializable()> _ Public Class SerializableDictionary(Of TKey, TValue) : Inherits Dictionary(Of TKey, TValue) : Implements IXmlSerializable Private Const EntryString As String = "Entry" Private Const KeyString As String = "key" Private Const ValueString As String = "value" Private Shared ReadOnly AttributableTypes As Type() = New Type() {GetType(Boolean), GetType(Byte), GetType(Char), GetType(DateTime), GetType(Decimal), GetType(Double), GetType([Enum]), GetType(Guid), GetType(Int16), GetType(Int32), GetType(Int64), GetType(SByte), GetType(Single), GetType(String), GetType(TimeSpan), GetType(UInt16), GetType(UInt32), GetType(UInt64)} Private Shared ReadOnly GetIsAttributable As Predicate(Of Type) = Function(t) AttributableTypes.Contains(t) Private Shared ReadOnly IsKeyAttributable As Boolean = GetIsAttributable(GetType(TKey)) Private Shared ReadOnly IsValueAttributable As Boolean = GetIsAttributable(GetType(TValue)) Private Shared ReadOnly GetElementName As Func(Of Boolean, String) = Function(isKey) If(isKey, KeyString, ValueString) Public Function GetSchema() As System.Xml.Schema.XmlSchema Implements System.Xml.Serialization.IXmlSerializable.GetSchema Return Nothing End Function Public Sub WriteXml(ByVal writer As XmlWriter) Implements IXmlSerializable.WriteXml For Each entry In Me writer.WriteStartElement(EntryString) WriteData(IsKeyAttributable, writer, True, entry.Key) WriteData(IsValueAttributable, writer, False, entry.Value) writer.WriteEndElement() Next End Sub Private Sub WriteData(Of T)(ByVal attributable As Boolean, ByVal writer As XmlWriter, ByVal isKey As Boolean, ByVal value As T) Dim name = GetElementName(isKey) If attributable Then writer.WriteAttributeString(name, value.ToString) Else Dim serializer As New XmlSerializer(GetType(T)) writer.WriteStartElement(name) serializer.Serialize(writer, value) writer.WriteEndElement() End If End Sub Public Sub ReadXml(ByVal reader As XmlReader) Implements IXmlSerializable.ReadXml Dim empty = reader.IsEmptyElement reader.Read() If empty Then Exit Sub Clear() While reader.NodeType <> XmlNodeType.EndElement While reader.NodeType = XmlNodeType.Whitespace reader.Read() Dim key = ReadData(Of TKey)(IsKeyAttributable, reader, True) Dim value = ReadData(Of TValue)(IsValueAttributable, reader, False) Add(key, value) If Not IsKeyAttributable AndAlso Not IsValueAttributable Then reader.ReadEndElement() Else reader.Read() While reader.NodeType = XmlNodeType.Whitespace reader.Read() End While End While reader.ReadEndElement() End While End Sub Private Function ReadData(Of T)(ByVal attributable As Boolean, ByVal reader As XmlReader, ByVal isKey As Boolean) As T Dim name = GetElementName(isKey) Dim type = GetType(T) If attributable Then Return Convert.ChangeType(reader.GetAttribute(name), type) Else Dim serializer As New XmlSerializer(type) While reader.Name <> name reader.Read() End While reader.ReadStartElement(name) Dim value = serializer.Deserialize(reader) reader.ReadEndElement() Return value End If End Function Public Shared Function Serialize(ByVal dictionary 
As SerializableDictionary(Of TKey, TValue)) As String Dim sb As New StringBuilder(1024) Dim sw As New StringWriter(sb) Dim xs As New XmlSerializer(GetType(SerializableDictionary(Of TKey, TValue))) xs.Serialize(sw, dictionary) sw.Dispose() Return sb.ToString End Function Public Shared Function Deserialize(ByVal xml As String) As SerializableDictionary(Of TKey, TValue) Dim xs As New XmlSerializer(GetType(SerializableDictionary(Of TKey, TValue))) Dim xr As New XmlTextReader(xml, XmlNodeType.Document, Nothing) Return xs.Deserialize(xr) xr.Close() End Function Public Function Serialize() As String Dim sb As New StringBuilder Dim xw = XmlWriter.Create(sb) WriteXml(xw) xw.Close() Return sb.ToString End Function Public Sub Parse(ByVal xml As String) Dim xr As New XmlTextReader(xml, XmlNodeType.Document, Nothing) ReadXml(xr) xr.Close() End Sub End Class

    Read the article
