Search Results

Search found 48063 results on 1923 pages for 'how to create dynamically'.


  • Handling Coding Standards at Work (I'm not the boss)

    - by Josh Johnson
    I work on a small team, around 10 devs. We have no coding standards at all. There are certain things that have become the norm but some ways of doing things are completely disparate. My big one is indentation. Some use tabs, some use spaces, some use a different number of spaces, which creates a huge problem. I often end up with conflicts when I merge because someone used their IDE to auto format and they use a different character to indent than I do. I don't care which we use I just want us all to use the same one. Or else I'll open a file and some lines have curly brackets on the same line as the condition while others have them on the next line. Again, I don't mind which one so long as they are all the same. I've brought up the issue of standards to my direct manager, one on one and in group meetings, and he is not overly concerned about it (there are several others who share the same view as myself). I brought up my specific concern about indentation characters and he thought a better solution would be to, "create some kind of script that could convert all that when we push/pull from the repo." I suspect that he doesn't want to change and this solution seems overly complicated and prone to maintenance issues down the road (also, this addresses only one manifestation of a larger issue). Have any of you run into a similar situation at work? If so, how did you handle it? What would be some good points to help sell my boss on standards? Would starting a grass roots movement to create coding standards, among those of us who are interested, be a good idea? Am I being too particular, should I just let it go? Thank you all for your time. Note: Thanks everyone for the great feedback so far! To be clear, I don't want to dictate One Style To Rule Them All. I'm willing to concede my preferred way of doing something in favor of what suits everyone the best. I want consistency and I want this to be a democracy. I want it to be a group decision that everyone agrees on. True, not everyone will get their way, but I'm hoping that everyone will be mature enough to compromise for the betterment of the group. Note 2: Some people are getting caught up in the two examples I gave above. I'm more after the heart of the matter. It manifests itself with many examples: naming conventions, huge functions that should be broken up, should something go in a util or service, should something be a constant or injected, should we all use different versions of a dependency or the same, should an interface be used for this case, how should unit tests be set up, what should be unit tested, (Java specific) should we use annotations or external config. I could go on.

    Read the article

  • I can't start MySQL in Ubuntu 11.04

    - by shomid
    I downloaded the following files from Oracle.com: MySQL-server-5.5.28-1.linux2.6.i386.rpm, MySQL-client-5.5.28-1.linux2.6.i386.rpm and MySQL-shared-5.5.28-1.linux2.6.i386.rpm. I then installed the RPM packages with the "alien -i" command, and when I start MySQL I get the following error: Starting MySQL .... * The server quit without updating PID file (/var/lib/mysql/omid-desktop.pid). The error log shows:
        121117 13:21:30 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121117 13:21:30 [Note] Plugin 'FEDERATED' is disabled.
        /usr/sbin/mysqld: Table 'mysql.plugin' doesn't exist
        121117 13:21:30 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        121117 13:21:30 InnoDB: The InnoDB memory heap is disabled
        121117 13:21:30 InnoDB: Mutexes and rw_locks use InnoDB's own implementation
        121117 13:21:30 InnoDB: Compressed tables use zlib 1.2.3
        121117 13:21:30 InnoDB: Using Linux native AIO
        121117 13:21:30 InnoDB: Initializing buffer pool, size = 128.0M
        121117 13:21:30 InnoDB: Completed initialization of buffer pool
        121117 13:21:30 InnoDB: highest supported file format is Barracuda.
        121117 13:21:30 InnoDB: Waiting for the background threads to start
        121117 13:21:31 InnoDB: 1.1.8 started; log sequence number 1595675
        121117 13:21:31 [Note] Recovering after a crash using mysql-bin
        121117 13:21:31 [Note] Starting crash recovery...
        121117 13:21:31 [Note] Crash recovery finished.
        121117 13:21:31 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
        121117 13:21:31 [Note] - '0.0.0.0' resolves to '0.0.0.0';
        121117 13:21:31 [Note] Server socket created on IP: '0.0.0.0'.
        121117 13:21:31 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist
        121117 13:21:31 mysqld_safe mysqld from pid file /var/lib/mysql/omid-desktop.pid ended
        121117 13:25:38 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121117 13:25:38 [Note] Plugin 'FEDERATED' is disabled.
        /usr/sbin/mysqld: Table 'mysql.plugin' doesn't exist
        121117 13:25:38 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        121117 13:25:38 InnoDB: The InnoDB memory heap is disabled
        121117 13:25:38 InnoDB: Mutexes and rw_locks use InnoDB's own implementation
        121117 13:25:38 InnoDB: Compressed tables use zlib 1.2.3
        121117 13:25:38 InnoDB: Using Linux native AIO
        121117 13:25:38 InnoDB: Initializing buffer pool, size = 128.0M
        121117 13:25:38 InnoDB: Completed initialization of buffer pool
        121117 13:25:38 InnoDB: highest supported file format is Barracuda.
        121117 13:25:38 InnoDB: Waiting for the background threads to start
        121117 13:25:39 InnoDB: 1.1.8 started; log sequence number 1595675
        /usr/sbin/mysqld: Too many arguments (first extra is 'start'). Use --verbose --help to get a list of available options
        121117 13:25:39 [ERROR] Aborting
        121117 13:25:39 InnoDB: Starting shutdown...
        121117 13:25:40 InnoDB: Shutdown completed; log sequence number 1595675
        121117 13:25:40 [Note] /usr/sbin/mysqld: Shutdown complete

    Read the article

  • Java Cloud Service Integration using Web Service Data Control

    - by Jani Rautiainen
    Java Cloud Service (JCS) provides a platform to develop and deploy business applications in the cloud. In Fusion Applications Cloud deployments customers do not have the option to deploy custom applications developed with JDeveloper, to ensure the integrity and supportability of the hosted application service. Instead, custom applications can be deployed to JCS and integrated with the Fusion Applications Cloud instance. This series of articles will go through the features of JCS, provide end-to-end examples of how to develop and deploy applications on JCS, and show how to integrate them with the Fusion Applications instance. In this article a custom application integrating with Fusion Applications using a Web Service Data Control will be implemented.
    Pre-requisites
    Access to a Cloud instance
    In order to deploy the application, access to a JCS instance is needed; a free trial JCS instance can be obtained from the Oracle Cloud site. To register you will need a credit card, even though the credit card will not be charged. To register simply click "Try it" and choose the "Java" option. The confirmation email will contain the connection details. See this video for an example of the registration. Once the request is processed you will be assigned two service instances: Java and Database. Applications deployed to JCS must use Oracle Database Cloud Service as their underlying database, so when a JCS instance is created a database instance is associated with it using a JDBC data source. The cloud services can be monitored and managed through the web UI. For details refer to Getting Started with Oracle Cloud.
    JDeveloper
    JDeveloper contains Cloud-specific features related to, for example, connections and deployment. To use these features download JDeveloper from the JDeveloper download site by clicking the “Download JDeveloper 11.1.1.7.1 for ADF deployment on Oracle Cloud” link; this version of JDeveloper has the JCS integration features that will be used in this article. For versions that do not include the Cloud integration features the Oracle Java Cloud Service SDK or the JCS Java Console can be used for deployment. For details on installing and configuring JDeveloper refer to the installation guide. For details on the SDK refer to Using the Command-Line Interface to Monitor Oracle Java Cloud Service and Using the Command-Line Interface to Manage Oracle Java Cloud Service.
    Create Application
    In this example the “JcsWsDemo” application created in the “Java Cloud Service Integration using Web Service Proxy” article is used as the base.
    Create Web Service Data Control
    In this example we will use a Web Service Data Control to integrate with the Credit Rule Service in Fusion Applications. The data control will be used to query data from Fusion Applications using a web service call and present the data in a table.
    To generate the data control choose the “Model” project and navigate to "New -> All Technologies -> Business Tier -> Data Controls -> Web Service Data Control" and enter the following:
    Name: CreditRuleServiceDC
    URL: https://ic-[POD].oracleoutsourcing.com/icCnSetupCreditRulesPublicService/CreditRuleService?WSDL
    Service: {http://xmlns.oracle.com/apps/incentiveCompensation/cn/creditSetup/creditRule/creditRuleService/}CreditRuleService
    On step 2 select the “findRule” operation. Skip step 3, and on step 4 define the credentials to access the service. Do note that in this example these credentials are only used when testing locally; for JCS deployment the credentials need to be manually updated on the EAR file. Click “Finish” and the generation is done.
    Creating the UI
    In order to use the data control we will need to populate the complex objects FindCriteria and FindControl. For simplicity, in this example we will create logic in a managed bean that populates the objects. Open “JcsWsDemoBean.java” and add the following logic:
        Map findCriteria;
        Map findControl;

        public void setFindCriteria(Map findCriteria) {
            this.findCriteria = findCriteria;
        }

        public Map getFindCriteria() {
            findCriteria = new HashMap();
            findCriteria.put("fetchSize", 10);
            findCriteria.put("fetchStart", 0);
            return findCriteria;
        }

        public void setFindControl(Map findControl) {
            this.findControl = findControl;
        }

        public Map getFindControl() {
            findControl = new HashMap();
            return findControl;
        }
    Open “JcsWsDemo.jspx”, navigate to “Data Controls -> CreditRuleServiceDC -> findRule(Object, Object) -> result” and drag and drop the “result” node into the “af:form” element in the page. On the “Edit Table Columns” dialog remove all columns except “RuleId” and “Name”. On the “Edit Action Binding” window displayed, enter a reference to the Java class created above by selecting “#{JcsWsDemoBean.findCriteria}”. Also define the value for “findControl” by selecting “#{JcsWsDemoBean.findControl}”.
    Deploy to JCS
    For the Web Service Data Control the authentication details need to be updated in the connection details before deploying. Open “connections.xml” by navigating to “Application Resources -> Descriptors -> ADF META-INF -> connections.xml” and change the user name and password entry from:
        <soap username="transportUserName" password="transportPassword"
    to match the access details for the target environment. Follow the same steps as documented in the previous article, “Java Cloud Service ADF Web Application”. Once deployed the application can be accessed with the URL: https://java-[identity domain].java.[data center].oraclecloudapps.com/JcsWsDemo-ViewController-context-root/faces/JcsWsDemo.jspx
    When accessed, the first 10 rules in the system are displayed.
    Summary
    In this article we learned how to integrate with Fusion Applications using a Web Service Data Control in JCS. In future articles various other integration techniques will be covered.

    Read the article

  • How to archive data from a table to a local or remote database in SQL 2005 and SQL 2008

    - by simonsabin
    Often you have the need to archive data from a table. This leads to a number of challenges:
    1. How can you do it without impacting users?
    2. How can I make it transactionally consistent, i.e. the data I put in the archive is the data I remove from the main table?
    3. How can I get it to perform well?
    Point 1 is very much tied to point 3. If it doesn't perform well then the delete of data is going to cause lots of locks and thus potentially blocking. For points 1 and 3 refer to my previous posts DELETE-TOP-x-rows-avoiding-a-table-scan and UPDATE-and-DELETE-TOP-and-ORDER-BY---Part2. In essence you need to be removing small chunks of data from your table and you want to do that avoiding a table scan. So that deals with the delete approach, but archiving is about inserting that data somewhere else. Well, in SQL 2008 they introduced a new feature, INSERT over DML (Data Manipulation Language, i.e. SQL statements that change data), or composable DML: the ability to nest DML statements within themselves, so you can pass the results of an insert, update or merge on to another statement. I've mentioned this before here: SQL-Server-2008---MERGE-and-optimistic-concurrency. This feature is currently limited to being able to consume the results of a DML statement in an INSERT statement. There are many restrictions, which you can find here http://msdn.microsoft.com/en-us/library/ms177564.aspx; look for the section "Inserting Data Returned From an OUTPUT Clause Into a Table". Even with the restrictions, what we can do is consume the OUTPUT from a DELETE and INSERT the results into a table in another database. Note that in BOL it refers to not being able to use a remote table; remote means a table on another SQL instance. To show this working, use this SQL to set up two databases, foo and fooArchive:
        create database foo
        go
        --create the source table fred in database foo
        select * into foo..fred from sys.objects
        go
        create database fooArchive
        go
        if object_id('fredarchive',DB_ID('fooArchive')) is null
        begin
            select getdate() ArchiveDate,* into fooArchive..FredArchive from sys.objects where 1=2
        end
        go
    And then we can use this simple statement to archive the data:
        insert into fooArchive..FredArchive
        select getdate(),d.*
        from (delete top (1)
              from foo..Fred
              output deleted.*) d
        go
    In this statement the delete can be any delete statement you wish, so if you are deleting by ids or a range of values then you can do that. Refer to the DELETE-TOP-x-rows-avoiding-a-table-scan post to ensure that your delete is going to perform. The last thing you want is to perform 100 deletes of 5000 records each and have each of those deletes do a table scan. For a solution that works for SQL 2005, or if you want to archive to a different server, you can use linked servers or SSIS. This example shows how to do it with linked servers. [ONARC-LAP03] is the source server.
        begin transaction
        insert into fooArchive..FredArchive
        select getdate(),d.*
        from openquery ([ONARC-LAP03],'delete top (1)
                        from foo..Fred
                        output deleted.*') d
        commit transaction
    and to prove the transactions work, try the following; you should get the same number of records before and after.
        select (select count(1) from foo..Fred) fred
              ,(select COUNT(1) from fooArchive..FredArchive ) fredarchive

        begin transaction
        insert into fooArchive..FredArchive
        select getdate(),d.*
        from openquery ([ONARC-LAP03],'delete top (1)
                        from foo..Fred
                        output deleted.*') d
        rollback transaction

        select (select count(1) from foo..Fred) fred
              ,(select COUNT(1) from fooArchive..FredArchive ) fredarchive
    The transactions are very important with this solution. Look what happens when you don't have transactions and an error occurs:
        select (select count(1) from foo..Fred) fred
              ,(select COUNT(1) from fooArchive..FredArchive ) fredarchive

        insert into fooArchive..FredArchive
        select getdate(),d.*
        from openquery ([ONARC-LAP03],'delete top (1)
                        from foo..Fred
                        output deleted.*
                        raiserror (''Oh doo doo'',15,15)') d

        select (select count(1) from foo..Fred) fred
              ,(select COUNT(1) from fooArchive..FredArchive ) fredarchive
    Before running this, think what the result would be. I got it wrong. What seems to happen is that the remote query is executed as a transaction and the error causes that to roll back. However the results have already been sent to the client and so get inserted into the archive table anyway.
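    Where the archiving has to be driven from application code rather than a SQL Agent job, the batched statement above can simply be repeated until nothing is left to move. The following is a minimal C# sketch of that idea; the connection string and the batch size of 500 are assumptions, and the SQL is the same composable-DML statement shown above.
        using System.Data.SqlClient;

        class ArchiveRunner
        {
            static void Main()
            {
                // Assumed connection string; point it at the instance hosting the foo database.
                const string connStr = "Server=.;Database=foo;Integrated Security=true";

                // The composable-DML archive statement from the post, with a larger batch size.
                const string archiveSql = @"
                    insert into fooArchive..FredArchive
                    select getdate(), d.*
                    from (delete top (500)
                          from foo..Fred
                          output deleted.*) d";

                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();
                    int affected;
                    do
                    {
                        using (var cmd = new SqlCommand(archiveSql, conn))
                        {
                            // Each execution archives one small batch; stop once no rows are moved.
                            affected = cmd.ExecuteNonQuery();
                        }
                    } while (affected > 0);
                }
            }
        }
    Keeping each batch small preserves the point of the original post: short transactions, few locks, and no long-running blocking of other users.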

    Read the article

  • Loading SpriteFont through a different class than Game.cs

    - by MintyAnt
    I am trying to load a single SpriteFont to print some debug information. In our current game we load both textures and music through a ResourceManager. Both are loaded from a FileStream and thus do not require Content.Load: SoundEffect soundEffect = SoundEffect.FromStream( fs ); Since this ResourceManager does not inherit from Game and is not structured like Game.cs, I cannot use the usual method: SpriteFont spriteFont = Content.Load<SpriteFont>(resource.Key.Item2); Does anyone have any idea how I can either:
    - Load the SpriteFont a different way
    - Create my own ContentManager
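    For the second option, a ContentManager can be constructed directly; it only needs an IServiceProvider that can supply the graphics device, which Game.Services normally provides. The following is a minimal sketch, assuming the ResourceManager can be handed Game.Services and the content root directory when it is created; the class name and asset name are placeholders.
        using System;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;

        public class FontResourceManager
        {
            private readonly ContentManager content;

            // Pass game.Services and game.Content.RootDirectory in from Game.cs.
            public FontResourceManager(IServiceProvider services, string contentRoot)
            {
                content = new ContentManager(services, contentRoot);
            }

            // SpriteFonts are compiled .xnb assets, so they still go through a ContentManager
            // rather than a FileStream like the textures and sound effects.
            public SpriteFont LoadFont(string assetName)
            {
                return content.Load<SpriteFont>(assetName);
            }
        }
    Usage from the game would then look something like new FontResourceManager(Services, Content.RootDirectory).LoadFont("DebugFont"), assuming a SpriteFont asset named DebugFont exists in the content project.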

    Read the article

  • Custom Model Binding of IEnumerable Properties in ASP.Net MVC 2

    - by Doug Lampe
    MVC 2 provides a GREAT feature for dealing with enumerable types. Let's say you have an object with a parent/child relationship and you want to allow users to modify multiple children at the same time. You can simply use the following syntax for any indexed enumerables (arrays, generic lists, etc.) and then your values will bind to your enumerable model properties.
        <% using (Html.BeginForm("TestModelParameter", "Home"))
           { %>
        <table>
        <tr><th>ID</th><th>Name</th><th>Description</th></tr>
        <% for (int i = 0; i < Model.Items.Count; i++)
           { %>
        <tr>
            <td>
                <%= i %>
            </td>
            <td>
                <%= Html.TextBoxFor(m => m.Items[i].Name) %>
            </td>
            <td>
                <%= Model.Items[i].Description %>
            </td>
        </tr>
        <% } %>
        </table>
        <input type="submit" />
        <% } %>
    Then just update your model, either by passing it into your action method as a parameter or explicitly with UpdateModel/TryUpdateModel.
        public ActionResult TestTryUpdate()
        {
            ContainerModel model = new ContainerModel();
            TryUpdateModel(model);

            return View("Test", model);
        }

        public ActionResult TestModelParameter(ContainerModel model)
        {
            return View("Test", model);
        }
    Simple, right? Well, not quite. The problem is the DefaultModelBinder and how it sets properties. In this case our model has a property that is a generic list (Items). The first bad thing the model binder does is create a new instance of the list. This can be fixed by making the property truly read-only by removing the set accessor. However this won't help, because the behaviour continues: as the model binder iterates through the items to "set" their values, it creates new instances of them as well. This means you lose any information not passed via the UI to your controller, so in the example above the "Description" property would be blank for each item after the form posts. One solution for this is custom model binding. I have put together a solution which allows you to retain the structure of your model. Model binding is a somewhat advanced concept so you may need to do some additional research to really understand what is going on here, but the code is fairly simple. First we will create a binder for the parent object which will retain the state of the parent as well as some information on which children have already been bound.
        public class ContainerModelBinder : DefaultModelBinder
        {
            /// <summary>
            /// Gets an instance of the model to be used to bind child objects.
            /// </summary>
            public ContainerModel Model { get; private set; }

            /// <summary>
            /// Gets a list which will be used to track which items have been bound.
            /// </summary>
            public List<ItemModel> BoundItems { get; private set; }

            public ContainerModelBinder()
            {
                BoundItems = new List<ItemModel>();
            }

            protected override object CreateModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Type modelType)
            {
                // Set the Model property so child binders can find children.
                Model = base.CreateModel(controllerContext, bindingContext, modelType) as ContainerModel;

                return Model;
            }
        }
    Next we will create the child binder and have it point to the parent binder to get instances of the child objects. Note that this only works if there is only one property of type ItemModel in the parent class, since the property used to find the item in the parent is hard coded.
        public class ItemModelBinder : DefaultModelBinder
        {
            /// <summary>
            /// Gets the parent binder so we can find objects in the parent's collection.
            /// </summary>
            public ContainerModelBinder ParentBinder { get; private set; }

            public ItemModelBinder(ContainerModelBinder containerModelBinder)
            {
                ParentBinder = containerModelBinder;
            }

            protected override object CreateModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Type modelType)
            {
                // Find the item in the parent collection and add it to the bound items list.
                ItemModel item = ParentBinder.Model.Items.FirstOrDefault(i => !ParentBinder.BoundItems.Contains(i));
                ParentBinder.BoundItems.Add(item);

                return item;
            }
        }
    Finally, we will register these binders in Global.asax.cs so they will be used to bind the classes.
        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();

            ContainerModelBinder containerModelBinder = new ContainerModelBinder();
            ModelBinders.Binders.Add(typeof(ContainerModel), containerModelBinder);
            ModelBinders.Binders.Add(typeof(ItemModel), new ItemModelBinder(containerModelBinder));

            RegisterRoutes(RouteTable.Routes);
        }
    I'm sure some of my fellow geeks will comment that this could be done more efficiently by simply rewriting some of the methods of the default model binder to get the same desired behavior. I like the method shown here because it extends the binder class instead of modifying it, which minimizes the potential for unforeseen problems. In a future post (if I ever get around to it) I will explore creating a generic version of these binders.
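    The post never shows the model classes themselves; a minimal sketch consistent with the binder and view code above might look like the following. The property names come from the article, everything else is an assumption, and in a real application the Items collection would be populated from your data source.
        public class ContainerModel
        {
            // Read-only list, as the post suggests, so the binder cannot replace the collection itself.
            public List<ItemModel> Items { get; private set; }

            public ContainerModel()
            {
                Items = new List<ItemModel>();
            }
        }

        public class ItemModel
        {
            // Posted back from the form via Html.TextBoxFor(m => m.Items[i].Name).
            public string Name { get; set; }

            // Displayed but not posted back; the custom binders preserve its value.
            public string Description { get; set; }
        }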

    Read the article

  • XNA running slow when making a texture

    - by Anthony
    I'm using XNA to test an image analysis algorithm for a robot. I made a simple 3D world that has grass, a robot, and white lines (that represent the course). The image analysis algorithm is a modification of the Hough line detection algorithm. I have the game render two camera views to a render target in memory. One camera is a top-down view of the robot going around the course, and the second camera is the view from the robot's perspective as it moves along. I take the render target of the robot camera and convert it to a Color[,] so that I can do image analysis on it.
        private Color[,] TextureTo2DArray(Texture2D texture, Color[] colors1D, Color[,] colors2D)
        {
            texture.GetData(colors1D);
            for (int x = 0; x < texture.Width; x++)
            {
                for (int y = 0; y < texture.Height; y++)
                {
                    colors2D[x, y] = colors1D[x + (y * texture.Width)];
                }
            }
            return colors2D;
        }
    I want to overlay the results of the image analysis on the robot camera view. The first part of the image analysis is finding the white pixels. When I find the white pixels I create a bool[,] array showing which pixels were white and which were black. Then I want to convert it back into a texture so that I can overlay it on the robot view. When I try to create the new texture showing which pixels were white, the game goes super slow (around 10 Hz). Can you give me some pointers as to what to do to make the game go faster? If I comment out this algorithm, then it goes back up to 60 Hz.
        private Texture2D GenerateTexturesFromBoolArray(bool[,] boolArray, Color[] colorMap, Texture2D textureToModify)
        {
            for (int i = 0; i < screenWidth; i++)
            {
                for (int j = 0; j < screenHeight; j++)
                {
                    if (boolArray[i, j] == true)
                    {
                        colorMap[i + (j * screenWidth)] = Color.Red;
                    }
                    else
                    {
                        colorMap[i + (j * screenWidth)] = Color.Transparent;
                    }
                }
            }
            textureToModify.SetData<Color>(colorMap);
            return textureToModify;
        }
    Each time I run Draw, I must set the texture to null so that I can modify it.
        public override void Draw(GameTime gameTime)
        {
            Vector2 topRightVector = ((SimulationMain)Game).spriteRectangleManager.topRightVector;
            Vector2 scaleFactor = ((SimulationMain)Game).config.scaleFactorScreenSizeToWindow;
            this.spriteBatch.Begin(); // Start the 2D drawing
            this.spriteBatch.Draw(this.textureFindWhite, topRightVector, null, Color.White, 0, Vector2.Zero, scaleFactor, SpriteEffects.None, 0);
            this.spriteBatch.End(); // Stop drawing.
            GraphicsDevice.Textures[0] = null;
        }
    Thanks for the help, Anthony G.
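    As a sketch of one way to reduce the per-frame work (not a verified fix), the white-pixel test and the colour mapping can be collapsed into a single pass over the 1D colour buffer, avoiding the intermediate bool[,] and the extra 2D copy entirely. The threshold test and the method name below are placeholders; the real analysis would supply its own definition of "white".
        // Hypothetical combined pass: threshold the camera pixels and build the overlay in one loop.
        private Texture2D GenerateOverlay(Color[] cameraPixels, Color[] overlayPixels, Texture2D overlayTexture)
        {
            for (int i = 0; i < cameraPixels.Length; i++)
            {
                // Placeholder "is white" test; replace with the analysis' actual threshold.
                Color c = cameraPixels[i];
                bool isWhite = c.R > 200 && c.G > 200 && c.B > 200;
                overlayPixels[i] = isWhite ? Color.Red : Color.Transparent;
            }

            // The texture must not be bound to the GraphicsDevice when SetData is called.
            overlayTexture.SetData(overlayPixels);
            return overlayTexture;
        }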

    Read the article

  • Download PowerCommands for VS 2008

    - by Editor
    PowerCommands is a set of useful extensions for Visual Studio 2008 that adds functionality to various areas of the IDE. The source code is included and requires the VS SDK for VS 2008, allowing you to modify the functionality or use it as a reference to create additional custom PowerCommand extensions. Visit the [...]

    Read the article

  • 301 redirect from "/index.html" to root if index.html doesn't exist

    - by Andrij Muzychka
    Can I create a 301 redirect from "index.html" to the root directory if the file "index.html" does not exist? For example, the link "http://example.com/index.html" shows a "404 Error" page; I need a 301 redirect to the root directory, "http://example.com/". In .htaccess I added this rule:
        Options +FollowSymLinks
        RewriteCond %{THE_REQUEST} ^.*/index.html
        RewriteRule ^(.*)index.html$ http://example.com/$1 [R=301,L]
    but it doesn't work. Can you help me solve this problem?

    Read the article

  • How can I use JIRA for project management with GreenHopper

    - by user22
    I am thinking of using JIRA + GreenHopper for my project management. I have seen that GreenHopper is for creating user stories and sprints, but I am not able to find out how I should add tasks, or how to break user stories into sub-stories. Do I first need to create a project in JIRA and then use GreenHopper, or can I use GreenHopper standalone for project management? I am thinking of JIRA as an issue tracker, not project management.

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects:
    -z stub
    This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies.
    STUB_OBJECT Mapfile Directive
    In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible.
    ASSERT Mapfile Directive
    All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of stub objects. For instance you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case.
    The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and serving as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form.
    Stub Objects
    A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems:
    Speed
    Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built.
    Complexity/Correctness
    In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures.
    Dependency Cycles
    It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built.
    Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements.
    The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces.
    All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface.
    All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol.
    Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.
        % cat idx5.c
        int _idx5[5] = { 0, 1, 2, 3, 4 };

        #pragma weak idx5 = _idx5

        int
        idx5_func(int index)
        {
                if ((index < 0) || (index > 4))
                        return (-1);
                return (_idx5[index]);
        }
    A mapfile is required to describe the interface provided by this shared object.
        % cat mapfile
        $mapfile_version 2
        STUB_OBJECT;
        SYMBOL_SCOPE {
                _idx5 {
                        ASSERT { TYPE=data; SIZE=4[5] };
                };
                idx5 {
                        ASSERT { BINDING=weak; ALIAS=_idx5 };
                };
                idx5_func;
            local:
                *;
        };
    The following main program is used to print all the index values available from the idx5 shared object.
        % cat main.c
        #include <stdio.h>

        extern int _idx5[5], idx5[5], idx5_func(int);

        int
        main(int argc, char **argv)
        {
                int i;

                for (i = 0; i < 5; i++)
                        (void) printf("[%d] %d %d %d\n",
                            i, _idx5[i], idx5[i], idx5_func(i));
                return (0);
        }
    The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.
        % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
        % ln -s libidx5.so.1 stublib/libidx5.so
        % elfdump -d stublib/libidx5.so | grep STUB
            [11]  FLAGS_1        0x4000000        [ STUB ]
    The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.
        % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
        % ./a.out
        ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
        Killed
        % LD_PRELOAD=stublib/libidx5.so.1 ./a.out
        ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
        Killed
    We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file.
        % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1
    Once the real object has been built in the lib subdirectory, the program can be run.
        % ./a.out
        [0] 0 0 0
        [1] 1 1 1
        [2] 2 2 2
        [3] 3 3 3
        [4] 4 4 4
    Mapfile Changes
    The version 2 mapfile syntax was extended in a number of places to accommodate stub objects.
    Conditional Input
    The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:
        Name        Meaning
        ...         ...
        _ET_DYN     shared object
        _ET_EXEC    executable object
        _ET_REL     relocatable object
        ...         ...
    STUB_OBJECT Directive
    The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.
        STUB_OBJECT;
    A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link editor evaluates the ASSERT attribute when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BIND Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
    TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it.
    SH_ATTR
    Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table:
        Section Attribute    Meaning
        BITS                 Section is not of type SHT_NOBITS
        NOBITS               Section is of type SHT_NOBITS
    SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it.
    SIZE
    Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below.
    SIZE
    The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows:
        $if _ELF32
        foo { ASSERT { TYPE=data; SIZE=4 } };
        bar { ASSERT { TYPE=data; SIZE=20 } };
        $elif _ELF64
        foo { ASSERT { TYPE=data; SIZE=8 } };
        bar { ASSERT { TYPE=data; SIZE=40 } };
        $else
        $error UNKNOWN ELFCLASS
        $endif
    I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as:
        foo { ASSERT { TYPE=data; SIZE=addrsize } };
        bar { ASSERT { TYPE=data; SIZE=addrsize[5] } };
    Exported Global Data Is Still A Bad Idea
    As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
    However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday...
    Conclusions
    It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • What changes in mindset are needed for a Java/C# programmer when learning Swift?

    - by Ian
    Swift seems to fit into the same “space” as Java/C#, as it was created to make it easier to create end-user applications, and it is also used to target smartphones like Java/C#. However, reading its documentation it seems to come from another universe; you could say it is from Jupiter while C#/Java are from Saturn. As a C# programmer I find myself making assumptions that are not true, so what are the conceptual “traps” that I should look out for while learning about Swift?

    Read the article

  • Announcing Entity Framework Code-First (CTP 5 release)

    In this article, Scott provides detailed coverage of the Entity Framework Code-First CTP 5 release and the features included with the build. He begins with the steps required to install EF Code First. Scott then examines the usage of EF Code First to create a model layer for the Northwind sample database in a series of steps. Towards the end of the article, Scott examines the usage of UI validation and a few additional EF Code First improvements shipped with CTP 5.
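    For readers who have not seen EF Code First, the "model layer" in question is built from plain classes and a DbContext; the following is a minimal sketch of the pattern against a Northwind-style table, with the class and connection details being assumptions rather than code from the article.
        using System.Data.Entity;

        // Plain CLR class mapped by convention to a Products table.
        public class Product
        {
            public int ProductID { get; set; }
            public string ProductName { get; set; }
            public decimal UnitPrice { get; set; }
        }

        // The context exposes one DbSet per entity and handles persistence.
        public class NorthwindContext : DbContext
        {
            public DbSet<Product> Products { get; set; }
        }
    A query then looks like: using (var db = new NorthwindContext()) { var cheap = db.Products.Where(p => p.UnitPrice < 10m).ToList(); }, assuming a connection string named NorthwindContext (or a default local database) is available.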

    Read the article

  • Best practice to query data from MS SQL Server in C#?

    - by Bruno
    What is the best way to query data from an MS SQL Server in C#? I know that it is not good practice to have an SQL query in the code. Is the best way to create a stored procedure and call it from C# with parameters?
        using (var conn = new SqlConnection(connStr))
        using (var command = new SqlCommand("StoredProc", conn) { CommandType = CommandType.StoredProcedure })
        {
            conn.Open();
            command.ExecuteNonQuery();
            conn.Close();
        }
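    Since the example above only calls ExecuteNonQuery, here is a minimal sketch of the querying side with parameters, which avoids building SQL strings by hand; the stored procedure name GetProductsByCategory and its parameter are placeholders, not part of the original question.
        using (var conn = new SqlConnection(connStr))
        using (var command = new SqlCommand("GetProductsByCategory", conn) { CommandType = CommandType.StoredProcedure })
        {
            // Parameters are passed separately, so values are never concatenated into the SQL text.
            command.Parameters.AddWithValue("@CategoryId", 5);

            conn.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader["ProductName"]);
                }
            }
        }
    The same Parameters collection works for plain SQL text (CommandType.Text), so it is the parameterization, rather than the stored procedure itself, that protects against SQL injection.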

    Read the article

  • Desktop Fun: Big Game Cats Wallpaper Collection Series 2

    - by Asian Angel
    Two years ago we shared a wonderful collection of big game cats wallpapers with you and today we are back with more cattitude goodness for you. Fill your desktop with these sleek and graceful friends from the animal kingdom with the second in our series of Big Game Cats Wallpaper collections.

    Read the article

  • Now Shipping! NetAdvantage for .NET 2010 Volume 3!

    The new NetAdvantage Ultimate includes all four Line of Business user interface control sets for ASP .NET, Windows Forms, WPF and Silverlight plus two advanced Data Visualization UI control sets for WPF and Silverlight. With six NetAdvantage products in one robust package, Infragistics® gives you hundreds of controls and infinite development possibilities. Unified XAML Product Strategy-Share Code, Get More Controls In the 10.3 release, Infragistics continues to deliver code parity between the XAML platforms, WPF and Silverlight. In the line of business toolsets, Infragistics introduces the new xamSchedule™, full-featured, Outlook® 2010-style schedule controls, and the new xamDataTree™, a data bound tree view that comfortably handles tens of thousands of tree nodes. Mimicking our Silverlight Drag and Drop Framework, the WPF Drag and Drop Framework CTP empowers you to add your own rich touches to your applications. Track Users' Behaviors New to all NetAdvantage Silverlight controls is the Infragistics Analytics Framework (IGAF), which empowers you to track user behavior in RIAs running on Silverlight 4. Building on the Microsoft® Silverlight Analytics Framework, with IGAF you can analyze the user's behaviors to ensure the experience you want to deliver. NetAdvantage for Windows Forms--New Office® 2010 Ribbon and Application Menu 2010 Create new experiences with Windows Forms. Now with Office 2010 styling, NetAdvantage for Windows Forms has new features such as Microsoft® Office 2010 ribbon and enhanced Infragistics.Excel to export the contents of the high performance WinGrid™ into Microsoft Excel® 2010. The new Windows Message Support enables Infragistics standalone editor controls to process numerous Windows® OS messages, allowing them to respond just like native controls to changes in the Windows environment. Create Faster Web 2.0 Experiences with NetAdvantage for ASP .NET Infragistics continues to push the envelope to deliver the fastest ASP .NET WebForms controls available on the market. Our lightning fast ASP .NET grids are now enhanced with XPS/PDF Exporting and Summary Rows. This release also includes support for jQuery Templating (as a CTP) within our WebDataGrid™ and WebDataTree™ controls allowing you to quickly cut down overall page size. Deliver Business Intelligence with Power, Flexibility and the Office 2010 Experience NetAdvantage for WPF Data Visualization and NetAdvantage for Silverlight Data Visualization help you deliver flexible, powerful and usable end user experiences in Business Intelligence applications. Both suites include the Pivot Grid that delivers the full power of online analytical processing (OLAP) to present multi-dimensional data, sliced and diced in cross-tabulated form for end users to drill down into, interact with and easily extract meaning from the data. Mapping Made Easy 10.3 marks the official release of the WPF Data Visualization xamMap™ control to map anything and everything from geographic to geo-spacial mapping data. Map layers allow you to add successive levels of detail, navigational panes for panning in all directions, color swatch panes that facilitate value scales like Choropleth shading, and scale panes allowing users to zoom-in and out. Both toolsets introduce the first of many relationship maps! With the xamOrgChart™ CTP you can map out organizational charts of up to 50K employees, competitive brackets (think World Cup) and any other relational, organizational map your application needs. 
    http://www.infragistics.com

    Read the article

  • Code Trivia #7

    - by João Angelo
    Let's go for another code trivia; it's business as usual, you just need to find what's wrong with the following code:
        static void Main(string[] args)
        {
            using (var file = new FileStream("test", FileMode.Create) { WriteTimeout = 1 })
            {
                file.WriteByte(0);
            }
        }
    TIP: There's something very wrong with this specific code and there's also another subtle problem that arises due to how the code is structured.

    Read the article

  • Eclipse has multiple issues after JRE-6 (OpenJDK) upgrade

    - by Eusebius
    I'm on 12.04 LTS, and trying to use Eclipse Indigo. This morning Ubuntu made me update the following packages: Preparing to replace icedtea-6-jre-cacao 6b24-1.11.3-1ubuntu0.12.04.1 (using .../icedtea-6-jre-cacao_6b24-1.11.4-1ubuntu0.12.04.1_amd64.deb) ... Unpacking replacement icedtea-6-jre-cacao ... Preparing to replace openjdk-6-jre-lib 6b24-1.11.3-1ubuntu0.12.04.1 (using .../openjdk-6-jre-lib_6b24-1.11.4-1ubuntu0.12.04.1_all.deb) ... Unpacking replacement openjdk-6-jre-lib ... Preparing to replace icedtea-6-jre-jamvm 6b24-1.11.3-1ubuntu0.12.04.1 (using .../icedtea-6-jre-jamvm_6b24-1.11.4-1ubuntu0.12.04.1_amd64.deb) ... Unpacking replacement icedtea-6-jre-jamvm ... Preparing to replace openjdk-6-jre-headless 6b24-1.11.3-1ubuntu0.12.04.1 (using .../openjdk-6-jre-headless_6b24-1.11.4-1ubuntu0.12.04.1_amd64.deb) ... Unpacking replacement openjdk-6-jre-headless ... Preparing to replace openjdk-6-jre 6b24-1.11.3-1ubuntu0.12.04.1 (using .../openjdk-6-jre_6b24-1.11.4-1ubuntu0.12.04.1_amd64.deb) ... Unpacking replacement openjdk-6-jre ... After that (but I cannot swear it is the root cause), I have the following issues in Eclipse: When trying to launch the simplest HelloWorld program (which behaves fine with manual javac/java), I get either nothing or: An internal error occurred during: "Launching HelloWorld". org/eclipse/jdt/debug/core/JDIDebugModel I get an "Error log" tab in the console panel, with an error: Could not create the view: An unexpected exception was thrown. (Follows a consequent NullPointerException stacktrace between sun.util.calendar.ZoneInfoFile.getZoneIDs(ZoneInfoFile.java:785) and org.eclipse.equinox.launcher.Main.main(Main.java:1386)) When trying to access the Installed JREs part of the preferences, I get a popup saying: Unable to create the selected preference page. An error occurred while automatically activating bundle org.eclipse.jdt.debug.ui (162). And the preference tab says An error has occurred when creating this preference page. Until today I had a manually installed Eclipse (one of the official bundles available on their site), I've tried to replace it by the repository version and I get the same errors. What should I do to make Eclipse work again? Another person reports: Same happened to me after updating last night. Already tried reinstalling Eclipse and Java, starting Eclipse with -clean and starting new workspace and new .eclipse dir, but nothing helps.

    Read the article

  • Can you Trust Search?

    - by David Dorf
    An awful lot of referrals to e-commerce sites come from web searches. Retailers rely on search engine optimization (SEO) to correctly position their website so they can be found. Search on "blue jeans" and the results are determined by a semi-secret algorithm -- in my case Levi.com, Banana Republic, and ShopStyle show up. The NY Times recently uncovered a situation where JCPenney, via third parties hired to help with SEO, was caught manipulating search results so they were erroneously higher in page rankings. No doubt this helped drive additional sales during this past Christmas. The article, The Dirty Little Secrets of Search, is well worth reading. My friend Ron Kleinman started an interesting discussion at the ARTS LinkedIn forum. He posed the question: The ability of a single company to "punish" any retailer (by significantly impacting their on-line sales volume) who does not play by their rules ... is this a good thing or a bad thing? Clearly JCP was in the wrong and needed to be punished, but should that decision lie with Google alone? Don't get me wrong -- I'm certainly not advocating we create a Department of Search where bureaucrats think of ways to spend money, but Google wields an awful lot of power in this situation, and it makes me feel uncomfortable. Now Google is incorporating more social aspects into their search results. For example, when Google knows it's me (i.e. I'm logged in when using Google) search results will be influenced by my Twitter network. In an effort to increase relevance, the blogs and re-tweeted articles from my network will be higher in the search results than they otherwise would be. So in the case of product searches, things discussed in my network will rise to the top. Continuing my blue jean example, if someone in my network had been discussing Macy's perhaps they would now be higher in the result set. soapbox: I already have lots of spammers posting bogus comments to this blog in an effort to create additional links to their sites and thus increase their search ranking. Should I expect a similar situation in Twitter and eventually Facebook? Now retailers need to expand their SEO efforts to incorporate social media as well, but do us all a favor and please don't cheat.

    Read the article

  • Advice Needed: Developers blocked by waiting on code to merge from another branch using GitFlow

    - by fogwolf
    Our team just made the switch from FogBugz & Kiln/Mercurial to Jira & Stash/Git. We are using the Git Flow model for branching, adding subtask branches off of feature branches (relating to Jira subtasks of Jira features). We use Stash to assign a reviewer when we create a pull request to merge back into the parent branch (usually develop, but for subtasks back into the feature branch). The problem we're finding is that even with the best planning and breakdown of feature cases, when multiple developers are working together on the same feature, say on the front end and back end, and their interdependent code lives in separate branches, one developer ends up blocking the other. We've tried pulling between each other's branches as we develop. We've also tried creating local integration branches that each developer can pull from multiple branches to test the integration as they develop. Finally, and this seems to work best for us so far, though with a bit more overhead, we have tried creating an integration branch off of the feature branch right off the bat (roughly as sketched below). When a subtask branch (off of the feature branch) is ready for a pull request and code review, we also manually merge those changesets into this feature integration branch. Then all interested developers can pull from that integration branch into other dependent subtask branches. This prevents anyone from waiting for a branch they depend on to pass code review. I know this isn't necessarily a Git issue -- it has to do with working on interdependent code in multiple branches, mixed with our own work process and culture. If we didn't have the strict code-review policy for develop (the true integration branch), then developer 1 could merge to develop for developer 2 to pull from. Another complication is that we are also required to do some preliminary testing as part of the code review process before handing the feature off to QA. This means that even if front-end developer 1 is pulling directly from back-end developer 2's branch as they go, and back-end developer 2 finishes but his/her pull request sits in code review for a week, then front-end developer 1 technically can't create his/her pull request/code review, because the reviewer can't test until back-end developer 2's code has been merged into develop. Bottom line: we're finding ourselves taking a much more serial rather than parallel approach in these instances, depending on which route we go, and would like to find a process that avoids this. Last thing I'll mention: we realize that by sharing code across branches that haven't been code reviewed and finalized yet, we are in essence using the beta code of others. To a certain extent I don't think we can avoid that, and we're willing to accept it to a degree. Anyway, any ideas, input, etc. are greatly appreciated. Thanks!
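    For illustration, here is a minimal sketch of the feature-integration-branch approach described above. The branch names (feature/checkout and its subtask branches) are made-up examples, not the poster's actual branches, and this is only one plausible way to run the commands, not a prescribed workflow:

      # Create a long-lived integration branch alongside the feature branch:
      git checkout -b feature/checkout-integration feature/checkout

      # Developers still branch subtasks off the feature branch as usual:
      git checkout -b subtask/checkout-backend feature/checkout
      git checkout -b subtask/checkout-frontend feature/checkout

      # When a subtask goes up for review, also merge it into the integration
      # branch so teammates can build against it without waiting for the review:
      git checkout feature/checkout-integration
      git merge --no-ff subtask/checkout-backend

      # A dependent developer pulls the shared (pre-review) work into their branch:
      git checkout subtask/checkout-frontend
      git merge feature/checkout-integration

    The integration branch is never merged to develop itself; only reviewed subtask branches are, so the code-review gate on develop stays intact.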

    Read the article

  • Django as Python extension?

    - by NoobDev4iPhone
    I come from php community and just started learning Python. I have to create server-side scripts that manipulate databases, files, and send emails. Some of it I found hard to do in python, comparing to php, like sending emails and querying databases. Where in php you have functions like mysql_query(), or email(), in python you have to write whole bunch of code. Recently I found Django, and my question is: is it a good framework for network-oriented scripts, instead of using it as a web-framework?
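    For comparison, here is a minimal standard-library sketch (no Django, no third-party packages) of the kind of query-and-email script the poster describes. The database file, table, SMTP host, and addresses are made-up examples:

      import sqlite3
      import smtplib
      from email.message import EmailMessage

      # Query a database (SQLite here; other engines expose similar DB-API drivers).
      conn = sqlite3.connect("example.db")
      rows = conn.execute("SELECT name, email FROM subscribers").fetchall()
      conn.close()

      # Send an email -- roughly the counterpart of PHP's mail().
      msg = EmailMessage()
      msg["Subject"] = "Hello"
      msg["From"] = "noreply@example.com"
      msg["To"] = "someone@example.com"
      msg.set_content(f"We currently have {len(rows)} subscribers.")

      with smtplib.SMTP("localhost") as smtp:
          smtp.send_message(msg)

    Whether Django adds value on top of this depends on how much of its ORM, settings, and email machinery the scripts would actually use.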

    Read the article

  • Silverlight Cream for May 20, 2010 -- #866

    - by Dave Campbell
    In this Issue: Mike Snow, Victor Gaudioso, Ola Karlsson, Josh Twist(-2-), Yavor Georgiev, Jeff Wilcox, and Jesse Liberty. Shoutouts: Frank LaVigne has an interesting observation on his site: The Big Take-Away from MIX10. Rishi has updated all his work, including a release of nRoute to the latest bits: nRoute Samples Revisited. Looks like I posted one of Erik Mork's links two days in a row :) ... that's because I meant to post this one: Silverlight Week – How to Choose a Mobile Platform. Just in case you missed it (and for me to find it easily), Scott Guthrie has an excellent post up on Silverlight 4 Tools for VS 2010 and WCF RIA Services Released. From SilverlightCream.com: Silverlight Tip of the Day #23 – Working with Strokes and Shapes: Mike Snow's Silverlight Tip of the Day number 23 is up and is about Strokes and Shapes -- as in dotted and dashed lines. New Silverlight Video Tutorial: How to Fire a Visual State based upon the value of a Boolean Variable: Victor Gaudioso's latest video tutorial is up and covers selecting and firing a visual state based on a boolean... project included. Simultaneously calling multiple methods on a WCF service from Silverlight: Ola Karlsson details a problem he had where he was calling multiple WCF services to pull all his data and had problems... turns out it was a blocking call, and he found the solution in the forums and details it all out for us... actually, a search at SilverlightCream.com would have found one of the better posts listed once you knew the problem :) Securing Your Silverlight Applications: Josh Twist has an article in MSDN on Silverlight security. He talks about Windows, forms, and .NET authorization, then WCF, WCF Data, cross domain, and XAP files. He also has some good external links. Template/View selection with MEF in Silverlight: Josh Twist points out that this next article is just a simple demonstration, but he's discussing, and provides code for, a MEF-driven ViewModel navigation scheme with animation on the navigation. Workaround for accessing some ASMX services from Silverlight 4: Are you having problems hitting your ASMX web service with Silverlight 4? Yeah... others are too! Yavor Georgiev at the Silverlight Web Services Team blog has a post up about it... why it's a sometimes problem and a workaround for it. Using Silverlight 4 features to create a Zune-like context menu: Jeff Wilcox used Silverlight 4 and the Toolkit to create some samples of menus, then demonstrates a duplication of the Zune menu. You Already Are A Windows Phone 7 Programmer: Jesse Liberty demonstrates that Silverlight developers are WP7 developers by creating a Silverlight and a WP7 app side by side using the same code... this is a closer look at the Silverlight TV presentation he did. Stay in the 'Light!

    Read the article

  • All about the Fusion Middleware Best Practice Centers for Applications

    Nishit Rao, Group Product Manager, and Markus Zirn, Senior Director, Oracle Fusion Middleware, discuss Oracle's Fusion Middleware Best Practice Centers for E-Business Suite, PeopleSoft, and Siebel, and how application developers can use the how-to guides, blogs, and webcasts to learn FMW components and create SOA solutions with their favorite applications.

    Read the article

< Previous Page | 583 584 585 586 587 588 589 590 591 592 593 594  | Next Page >