Search Results

Search found 25377 results on 1016 pages for 'development'.


  • 5 Step Procedure for Android Deployment with NetBeans IDE

    - by Geertjan
    I'm finding that it's so simple to deploy apps to Android that I don't need the Android emulator at all. (I haven't been able to figure out how it works anyway; a big blinky screen pops up that I don't know what to do with.) I simply deploy the app straight to Android, try it out there, and then uninstall it if needed. The whole process takes a few seconds; only steps 4 and 5 below need to be done for each deployment iteration, after you've done steps 1, 2, and 3 once to set up the deployment environment. Here's what I do:

    1. On Android, go to Settings | Applications. Check "Unknown sources". In "Development", check "USB debugging".

    2. Connect Android to your computer via a USB cable.

    3. Start up NetBeans IDE, with NBAndroid installed, as described yesterday, and create your "Hello World" app.

    4. Right-click the project in the IDE and choose "Export Signed Android Package". Create a new keystore, or choose an existing one, via the wizard that appears. (It would be nice if NBAndroid would let you set up a keystore once and then reuse it for all your projects, without needing to work through the whole wizard step by step each time.) At the end of the wizard, you'll have a new release APK file (Android deployment archive) in the project's 'bin' folder, which you can see in the Files window.

    5. Go to the command line (it would be nice if NBAndroid were to support adb, which would mean I wouldn't need the command line at all) and browse to the location of the APK file above. Type "adb install helloworld-release.apk", or whatever the APK file is called, as shown in the short transcript below. You should see a "Success" message in the command line.

    Now the application is installed. On your Android, go to "Applications", and there you'll see your brand new app. Try it out there and delete it if you're not happy with it. After you've made a change in your app, simply repeat steps 4 and 5, i.e., create a new APK and install it via adb. Steps 4 and 5 take a couple of seconds. And, given that it's all so simple, I don't see the value of the Android emulator at all.
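    For reference, the step 4 and 5 loop looks something like this at the command line (the APK name matches the example above; the package name in the uninstall command is made up for illustration):

        cd path\to\HelloWorld\bin
        adb install helloworld-release.apk
        adb uninstall com.example.helloworld

    The install command should print "Success"; the uninstall command removes the app from the device once you're done testing, which is the whole deploy/try/uninstall cycle described above.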

    Read the article

  • Enterprise with eyes on NoSQL

    - by thegreeneman
    Since joining Oracle a few months back, I have had the fortune of being able to interact with a number of large enterprise organizations and discuss their current state of adoption for NoSQL database technology. It is worth noting that a large percentage of these organizations do have some NoSQL use and have been steadily increasing their understanding of its applicability for certain data management workloads. Through those discussions I've learned that one of the biggest issues confronting enterprise adoption of NoSQL databases is the lack of standards for access, administration, and monitoring. This was not so much of an issue with the early adopters of NoSQL technology, because they employed a highly DevOps-centric approach to application deployment, leaving a select few highly qualified developers with the task of managing the production of the system that they designed and implemented. However, as NoSQL technology moves out of the startup and into the hands of larger corporate entities, developers with a broad skill set, capable of both development and I.T.-type production management, are in short supply and quickly get moved on to new projects, often moving to different roles within the company. This difference in the way smaller, more agile startups operate as compared to more established companies is revealing a gap in the NoSQL technology segment that needs to be addressed. This is one of the places where a company such as Oracle has a leg up on the NoSQL database front. Having gone through a past database maturation process, combined with a vast set of corporate relationships that have grown hand in hand with solving these types of issues, Oracle is in a great place to lead the way in closing the requirements gap for NoSQL technology. Oracle's understanding of the needs specific to mature organizations has already made its way into Oracle's NoSQL Database offering, with features such as: one-click cluster deployment with visual topology planning, standards-based monitoring protocols such as SNMP, support for data access for reporting via standard SQL, and integration with emerging standards for data access such as MapReduce. Given the exciting developments we're driving in the Oracle NoSQL Database group, I will have a lot more to say about this topic as we move into the second half of the year.

    Read the article

  • Creating a Yes/No MessageBox in a NuGet install/uninstall script

    - by ParadigmShift
    Sometimes getting a little feedback during the install/uninstall process of a NuGet package could be really useful. Instead of accounting for all possible ways to install your NuGet package for every user, you can simplify the installation by clarifying with the user what they want. This example shows how to generate a Windows yes/no message box to get input from the user in the PowerShell install or uninstall script. We'll use the prompt on the uninstall to confirm whether the user wants to delete a custom setting that the initial install placed in their configuration. Obviously you could use the prompt in any way you want. The objects of the message box are generated similarly to the controls in the code-behind of a WinForm. At the beginning of your script enter this:

        param($installPath, $toolsPath, $package, $project)

        # Set up path variables
        $solutionDir = Get-SolutionDir
        $projectName = (Get-Project).ProjectName
        $projectPath = Join-Path $solutionDir $projectName

        ########################################################################
        # WinForm generation for prompt
        ########################################################################
        function Ask-Delete-Custom-Settings {
            [void][reflection.assembly]::loadwithpartialname("System.Windows.Forms")
            [void][reflection.assembly]::loadwithpartialname("System.Drawing")

            $title = "Package Uninstall"
            $message = "Delete the customized settings?"

            # Create form and controls
            $form1 = New-Object System.Windows.Forms.Form
            $label1 = New-Object System.Windows.Forms.Label
            $btnYes = New-Object System.Windows.Forms.Button
            $btnNo = New-Object System.Windows.Forms.Button

            # label1
            $label1.Location = New-Object System.Drawing.Point(12,9)
            $label1.Name = "label1"
            $label1.Size = New-Object System.Drawing.Size(254,17)
            $label1.TabIndex = 0
            $label1.Text = $message

            # btnYes
            $btnYes.Location = New-Object System.Drawing.Point(156,45)
            $btnYes.Name = "btnYes"
            $btnYes.Size = New-Object System.Drawing.Size(48,25)
            $btnYes.TabIndex = 1
            $btnYes.Text = "Yes"

            # btnNo
            $btnNo.Location = New-Object System.Drawing.Point(210,45)
            $btnNo.Name = "btnNo"
            $btnNo.Size = New-Object System.Drawing.Size(48,25)
            $btnNo.TabIndex = 2
            $btnNo.Text = "No"

            # form1
            $form1.ClientSize = New-Object System.Drawing.Size(281,86)
            $form1.Controls.Add($label1)
            $form1.Controls.Add($btnYes)
            $form1.Controls.Add($btnNo)
            $form1.Name = "Form1"
            $form1.Text = $title

            # Event handlers
            $btnYes.add_Click({btnYes_Click})
            $btnNo.add_Click({btnNo_Click})

            return $form1.ShowDialog()
        }

        function btnYes_Click {
            # 6 = Yes
            $form1.DialogResult = 6
        }

        function btnNo_Click {
            # 7 = No
            $form1.DialogResult = 7
        }

    This has also wired up the click events to the form, and it is all it takes to create the message box. Now we have to actually use the message box and get the user's response, or this is all pointless. We'll then delete the section of the application/web configuration called <Custom.Settings>:

        # $configFile is assumed to hold the path to the app/web config file
        # (the original post doesn't show it being set)
        [xml] $configXmlContent = Get-Content $configFile

        Write-Host "Please respond to the question in the Dialog Box."
        $dialogResult = Ask-Delete-Custom-Settings
        # 6 = Yes, 7 = No
        Write-Host "dialogResult = $dialogResult"

        if ($dialogResult.ToString() -eq "Yes")
        {
            Write-Host "Deleting customized settings"
            $customSettingsNode = $configXmlContent.configuration.Item("Custom.Settings")
            $configXmlContent.configuration.RemoveChild($customSettingsNode)
            $configXmlContent.Save($configFile)
        }
        if ($dialogResult.ToString() -eq "No")
        {
            Write-Host "Do not delete customized settings"
        }

    The part where I check if ($dialogResult.ToString() -eq "Yes") could just as easily check the value for either 6 or 7 (Yes or No). I just personally decided I liked this way better.

    Shahzad Qureshi is a Software Engineer and Consultant in Salt Lake City, Utah, USA. His certifications include: Microsoft Certified System Engineer, 3CX Certified Partner, and Global Information Assurance Certification - Secure Software Programmer - .NET. He is the owner of Utah VoIP Store at http://www.utahvoipstore.com/ and SWS Development at http://www.swsdev.com/ and publishes Windows apps under the name Blue Voice.

    Read the article

  • Create simple jQuery plugin

    - by ybbest
    In the last post, I showed you how to add a function to jQuery. In this post, I will show you how to create a plugin to achieve the same thing.

    1. You need to wrap your code in the following construct. This is because you should not use $ directly, as $ is a global variable and could clash with some other library that also uses $. Basically, you pass the jQuery object into the function so that $ is made available inside it. (JavaScript uses functions to create scope, so you can make sure $ refers to jQuery inside the function.)

        (function ($) {
            // Your code goes here.
        })(jQuery);

    2. Put your code into the construct above.

        (function ($) {
            $.getParameterByName = function (name) {
                name = name.replace(/[\[]/, "\\\[").replace(/[\]]/, "\\\]");
                var regexS = "[\\?&]" + name + "=([^&#]*)";
                var regex = new RegExp(regexS);
                var results = regex.exec(window.location.search);
                if (results == null)
                    return "";
                else
                    return decodeURIComponent(results[1].replace(/\+/g, " "));
            };
        })(jQuery);

    3. Now you can reference the code in your project and call the method from your JavaScript (see the quick usage sketch after the references below).

    References: Provides scope for variables. Variables are scoped at the function level in JavaScript. This is different from what you might be used to in a language like C# or Java, where variables are scoped to the block. What this means is that if you declare a variable inside a loop or an if statement, it will be available to the entire function. If you ever find yourself needing to explicitly scope a variable inside a function, you can use an anonymous function to do this. You can actually create an anonymous function and then execute it straight away, and all the variables inside will be scoped to the anonymous function:

        (function () {
            var myProperty = "hello world";
            alert(myProperty);
        })();
        alert(typeof(myProperty)); // undefined

    How does an anonymous function in JavaScript work? | Building Your First jQuery Plugin | A Plugin Development Pattern
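    As a quick usage sketch: once the plugin file is referenced, a page loaded as, say, example.com/page.html?name=ybbest could read the query-string parameter like this (the URL and parameter name are just examples):

        // Returns "ybbest" for the example URL above, or "" if absent
        var name = $.getParameterByName('name');
        alert(name);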

    Read the article

  • Remote Graphics Diagnostics with Windows RT 8.1 and Visual Studio 2013

    - by Michael B. McLaughlin
    Originally posted on: http://geekswithblogs.net/mikebmcl/archive/2013/11/12/remote-graphics-diagnostics-with-windows-rt-8.1-and-visual-studio.aspx

    This blog post is a brief follow-up to my What's New in Graphics and Game Development in Visual Studio 2013 post on the MVP Award blog. While writing that post I was testing out various features to try to make sure everything worked as expected. I had some trouble getting Remote Graphics Diagnostics (a/k/a remote graphics debugging) working on my first generation Surface RT (upgraded to Windows RT 8.1). It was all the more strange since I could use remote debugging when doing CPU debugging; it was just graphics debugging that was causing trouble. After some discussions with the great folks who work on the graphics tools in Visual Studio, they were able to repro the problem and recommend a solution: my Surface RT needed the ARM Kits policy installed on it. Once I followed the instructions on the previous link, I could successfully use Remote Graphics Diagnostics on my Surface RT. Please note that this requires Windows RT 8.1 RTM (i.e. not Preview) and that Remote Graphics Diagnostics on ARM only works when you are using Visual Studio 2013, as it is a new feature (it should work just fine using the Express for Windows version). Also, when I installed the ARM Kits policy I needed to do two things to get it to work properly. First, when following the "How to install the Kits policy" instructions, I needed to copy the SecureBoot folder into Program Files on my Surface RT (specifically, I copied the SecureBoot folder to "C:\Program Files\Windows Kits\8.1\bin\arm\" on my Surface RT, creating any necessary directories). It may work if it's in any system folder; I didn't test any others after I got it working. I had initially put it in my Downloads folder and tried installing it from there. When the machine restarted it displayed a worrisome error message. I repeatedly pressed the button that would allow me to retry, and eventually the machine rebooted and managed to recover itself to its previous state. Second, I needed to install it as an Administrator. The instructions say that this might be necessary; for me it was. Remote Graphics Diagnostics is a great new feature in Visual Studio 2013, so I definitely encourage all of you to check it out!

    Read the article

  • All hail the Excel Queen

    - by Tim Dexter
    An excellent question this past week from dear ol' Blighty; actually from Brian at Nextgen Clearing Ltd in the big smoke (London). Brian was developing an Excel template and wanted to be able to reference the data fields multiple times inside the template. Damn good question, and I of course had some wacky solutions, from macros and cell referencing in Excel to pre-processing the data with an XSL stylesheet to copy the data multiple times so it could be referenced multiple times. All completely outlandish. Enter our Queen of Excel, Shirley, from the development team. Shirley is single-handedly responsible for the Excel templates; I put her through six months of hell a few years back with a host of Excel template requirements. She was more than up to the challenge and has developed some great features. One of those is the ability to use the hidden XDO_METADATA sheet to map the data to custom named fields so they can be used multiple times in the template. So simple and very neat! Excel template and regular Excel users will know that you can only use the naming function once, i.e., the names have to be unique across the workbook, so you cannot reuse a cell/group name. To get around this you can just come up with as many cell names as you want and map them in the XDO_METADATA sheet to the data columns/fields in your XML data set. For example:

        XDO_?DEPTNO_SUMMARY?    <?DEPTNO?>
        XDO_?DNAME_SUMMARY?     <?DNAME?>
        XDO_GROUP_?G_D_DETAIL?  <xsl:for-each-group select=".//G_D" group-by="./DEPTNO">
        XDO_?DEPTNO_DETAIL?     <?DEPTNO?>

    As you can see, DEPTNO has been referenced twice and mapped to different named values in the left-hand column. These values can then be used to name individual cells in the Excel template. You'll also notice a mix of Publisher <?...?> and native XSL commands, so the world is your oyster on the mapping and the complexity you might need for calculations or string manipulation. Shirley has kindly built out a sample Excel template, data, and result here so you can see how it all hangs together. The XDO_METADATA sheet is hidden; just right-click on the sheet names and use the Unhide command to show it.

    Read the article

  • LINQ to Twitter v2.1.09 Released

    - by Joe Mayo
    Originally posted on: http://geekswithblogs.net/WinAZ/archive/2013/10/15/linq-to-twitter-v2.1.09-released.aspx

    Today, I released LINQ to Twitter v2.1.09. Here are the important new changes.

    Bug Fixes: This is primarily a bug fix release. Most notably, there were authentication problems in WinRT apps. This is now fixed.

    New Features: One new feature is the addition of ApplicationOnlyAuthentication for WinRT. It is fully async. Here's how it works:

        var auth = new WinRtApplicationOnlyAuthorizer
        {
            Credentials = new InMemoryCredentials
            {
                ConsumerKey = "",
                ConsumerSecret = ""
            }
        };

        if (auth == null || !auth.IsAuthorized)
        {
            await auth.AuthorizeAsync();
        }

        var twitterCtx = new TwitterContext(auth);

        (from search in twitterCtx.Search
         where search.Type == SearchType.Search &&
               search.Query == SearchTextBox.Text
         select search)
        .MaterializedAsyncCallback(
            async response =>
                await Dispatcher.RunAsync(
                    CoreDispatcherPriority.Normal,
                    async () =>
                    {
                        Search searchResponse = response.State.Single();
                        string message = string.Format(
                            "Search returned {0} statuses",
                            searchResponse.Statuses.Count);
                        await new MessageDialog(message, "Search Complete").ShowAsync();
                    }));

    It's called the WinRtApplicationOnlyAuthorizer. You only need two tokens, ConsumerKey and ConsumerSecret, which come from your Twitter API application settings page. Note: You need a Twitter Application, which you can create at https://dev.twitter.com/. The MaterializedAsyncCallback materializes your query and handles the response. I put everything together in a lambda for demonstration purposes, but you can always replace the callback with a handler of type Action<TwitterAsyncResponse<IEnumerable<T>>>, where T is Search for this example (see the sketch at the end of this post).

    On the Horizon: The next version of LINQ to Twitter is in development. I discussed it at LINQ to Twitter Async. This isn't complete, but you can download the source code at the LINQ to Twitter site on CodePlex. I've completed all the spikes for what I thought would be the hard parts and now have prototypes of queries and commands working. This would be a good time to provide feedback if there are features in the current version that you think could be improved. The current driving forces for the next version will be async and PCL.

    @JoeMayo
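    As a rough sketch of that named-handler variant (the types come from the library as described above; the method name and body are mine, for illustration only):

        void HandleSearchResponse(TwitterAsyncResponse<IEnumerable<Search>> response)
        {
            Search searchResponse = response.State.Single();
            // inspect searchResponse.Statuses here instead of in a lambda
        }

        // ...then pass the method in place of the lambda:
        // .MaterializedAsyncCallback(HandleSearchResponse);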

    Read the article

  • Taking a Chomp out of a (Social Network) Product Hype

    - by kellsey.ruppel
    Andrew Kershaw, Senior Director, Oracle Social Network Product Development, speaks about Oracle Social Network.

    One of our competitors is being very aggressive with its own developed social network add-on, but there should be no doubt in anyone's mind that the Oracle social capabilities available with Fusion CRM stack up well against it. Within the Oracle Cloud, we have announced a product called Oracle Social Network. That technology is pre-integrated into Fusion Applications, enabling your customer to build a collaborative and social enterprise (without all the noise!). Oracle Social Network is designed together with our Fusion Applications. It is very conveniently pre-integrated with CRM, HCM, Financials, Projects, Supply Chain, and the Fusion family. But what's even better is that the individual teams can take a considered approach to what they are trying to achieve within the collaboration process and the outcome they are trying to enable. Then they can utilize the network and collaboration tools to support that result. And there's more! The Fusion teams can design social interactions that bridge across and outside their individual product lines, because we have more than just a product line and they know they have the social network to connect them. I know we have a superior product, but it is our ability to understand and execute across the enterprise that will enable us to deliver a much more robust and capable platform in the short term than our competitor can. We have built a product specifically designed for enterprise social collaboration, which is not the same for the competition. We have delivered a much more effective solution: one in which individuals can easily collaborate to get results, while being confident that they know who has access to their information. Our platform has been pre-built to cross company boundaries and enable our customers to collaborate not just with their customers, but with their partners and suppliers as well. So Fusion addresses the combination of the enterprise application suite with enterprise collaboration and social networking. Oracle Social Network already has a feature-function advantage over our competitor's tool, providing real added value to employees. Plus, Oracle has the ability to execute in a broad enterprise and cross-enterprise way that our competitors cannot. We have the power of a tool that provides the core social fabric across all of the applications, as well as supporting enterprise collaboration. That allows us to provide intelligent business insight, connections, and recommendations that our competitor simply can't. From our competitors, customers get integration for Sales; they get integration for Service; but then they have to integrate every other enterprise asset that they have by themselves. With Oracle, we are doing the integration. Fusion Applications will be pre-integrated, and over time, all of the applications in the business suite, including our Applications Unlimited and specialist industry applications, will connect to the Oracle Social Network. I'm confident these capabilities make Oracle Social Network the only collaboration platform on which to deliver the social enterprise.

    Read the article

  • PHP - Internal APIs/Libraries - What makes sense?

    - by Mark Locker
    I've been having a discussion lately with some colleagues about the best way to approach a new project, and thought it'd be interesting to get some external thoughts thrown into the mix. Basically, we're redeveloping a fairly large site (written in PHP) and have differing opinions on how the platform should be set up.

    Requirements: The platform will need to support multiple internal websites, as well as external (non-PHP) projects, which at the moment consist of a mobile app and a toolbar. We have no plans/need in the foreseeable future to open up an API externally (for use in products other than our own).

    My opinion: We should have a library of well-documented native model classes which can be shared between projects. These models will represent everything in our database and can take advantage of object-oriented features such as inheritance, traits, magic methods, etc., as well as employing ORM. We can then add an API layer on top of these models which can basically accept requests and route them to the appropriate methods, translating the response so that it can be used platform-independently. This routing for each method can be set up as and when it's required.

    Their opinion: We should have a single HTTP API which is used by all projects (internal PHP ones or otherwise).

    My thoughts: To me, there are a number of issues with using the sole HTTP API approach:

    1. It will be very expensive performance-wise. One page request will result in several additional HTTP requests (which, although local, are still ones that Apache will need to handle).
    2. You'll lose all of the best features PHP has for OO development, from simple inheritance to employing the likes of ORM, which can save you writing a lot of code.
    3. For internal projects, the actual process makes me cringe. To get a user's name, for example, a request would go out of our box, over the LAN, back in, then run through a script which calls a method, JSON-encodes the output, and feeds that back. That would then need to be JSON-decoded and presented as an array ready to use. Working with arrays, as opposed to objects, makes me sad in a modern PHP framework.

    Their thoughts (and my responses):

    - Having one method of doing things keeps things simple. (You'd only do things differently if you were using a different language anyway.)
    - It will become robust. (Seeing as the API will run off the library of models, I think my option would be just as robust.)

    What do you think? I'd be really interested to hear the thoughts of others on this, especially as opinions on both sides are not founded on any past experience.

    Read the article

  • MySQL Workbench 6.2.1 BETA has been released

    - by user12602715
    The MySQL Workbench team is announcing availability of the first beta release of its upcoming major product update, MySQL Workbench 6.2.

    MySQL Workbench 6.2 focuses on support for innovations released in MySQL 5.6 and MySQL 5.7 DMR (Development Release) as well as MySQL Fabric 1.5, with features such as:

    - A new spatial data viewer, allowing graphical views of result sets containing GEOMETRY data and taking advantage of the new GIS capabilities in MySQL 5.7.
    - Support for new MySQL 5.7.4 SQL syntax and configuration options.
    - Metadata Locks View, which shows the locks connections are blocked or waiting on.
    - MySQL Fabric cluster connectivity: browsing, viewing status, and connecting to any MySQL instance in a Fabric Cluster.
    - MS Access migration Wizard: easily move to MySQL databases.

    Other significant usability improvements were made, aiming to raise productivity for advanced and new users:

    - Direct shortcut buttons to commonly used features in the schema tree.
    - Improved results handling: columns have better auto-sizing and their widths are saved; fonts can also be customized; results can be "pinned" to persist viewing data.
    - A convenient Run SQL Script command to directly execute SQL scripts, without loading them first.
    - Database Modeling has been updated to allow changes to the formatting of note objects, and attached SQL scripts can now be included in forward engineering and synchronization scripts.
    - Integrated Visual Explain within the result set panel, with Visual Explain drill-down for large to very large explain plans.
    - Shared SQL snippets in the SQL Editor, allowing multiple users to share SQL code by storing it within a MySQL instance.
    - And much more.

    The list of provided binaries was updated, and MySQL Workbench binaries are now available for:

    - Windows 7 or newer
    - Mac OS X Lion or newer
    - Ubuntu 12.04 LTS and Ubuntu 14.04
    - Fedora 20
    - Oracle Linux 6.5
    - Oracle Linux 7
    - Sources for building in other Linux distributions

    For the full list of changes in this revision, visit http://dev.mysql.com/doc/relnotes/workbench/en/changes-6-2.html

    For discussion, join the MySQL Workbench Forums: http://forums.mysql.com/index.php?151

    Download MySQL Workbench 6.2.1 now, for Windows, Mac OS X 10.7+, Oracle Linux 6 and 7, Fedora 20, Ubuntu 12.04 and Ubuntu 14.04, or sources, from: http://dev.mysql.com/downloads/tools/workbench/

    On behalf of the MySQL Workbench and the MySQL/Oracle RE Team.

    Read the article

  • Innovation for Retailers

    - by David Dorf
    One of my main objectives for this blog is to point out emerging technologies and how they might apply to the retail industry. But ideas are just the beginning; retailers either have to rely on vendors or have their own lab to explore these ideas and see which ones work. (A healthy dose of both is probably the best solution.) The Nordstrom Innovation Lab is a fine example of dedicating resources to cultivate ideas and test prototypes. The video below, from 2011, is a case study in which the team builds an iPad app that helps customers purchase sunglasses in the store. Customers take pictures of themselves wearing different sunglasses, then can do side-by-side comparisons. There are a few interesting take-aways from their process. First, they are working in the store alongside employees and customers. There's no concept of documenting all the requirements then building the product. Instead, they work closely with those that will be using the app in order to fully understand what's needed. When they find an issue, they change the software onsite and try again. This iterative prototyping ensures their product hits the mark. Feels like Extreme Programming, if you recall that movement. Second, they have time-boxed the project to one week. Either it works or it doesn't, and either way they've only expended a week's worth of resources. Innovation always entails failure, and those that succeed are often good at detecting failure quickly then adjusting. Fail fast and fail often. Third, it's not always about technology. I was impressed they used paper designs to walk through user stories and help understand the needs of the customer. Pen and paper is the innovator's most powerful tool. Our Retail Applied Research (RAR) team uses some of these concepts in our development process. (Calling it a process is probably overkill.) We try to give life to concepts quickly so the rest of the organization can help us decide if we're heading in the right direction. It takes many failures before finding a successful product.

    Read the article

  • Weird Ubuntu Desktop Boot Partition On External Hard Drive

    - by Magnitus
    I have a ThinkPad with Windows 7. Last time I installed an Ubuntu/Windows dual boot, Windows was never the same after and regularly got corrupted, so this time I installed Ubuntu on a separate external hard drive. I took a 500 GB external hard drive and used Windows to shrink the partition on it to 400 GB, freeing 100 GB to install Ubuntu. Then I modified the booting priority of my computer to boot from the external hard drive if present. Then I installed Ubuntu Desktop on the external hard drive using a DVD, picked the most simplistic partitioning scheme I could get away with (didn't go auto as it didn't include the external hard drive as a choice), and voilà. Fast forward some time and I'm trying to refresh my understanding of Linux partitions to install a bunch of servers, so I'm looking at the current partitioning scheme on my external hard drive and find the boot partition puzzling... sda is my integrated hard drive with Windows 7; sdb is my Ubuntu desktop external hard drive. Running parted on sdb, I get this:

        (parted) print
        Model: WD My Passport 0740 (scsi)
        Disk /dev/sdb: 500GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End    Size    Type      File system     Flags
         1      1049kB  393GB  393GB   primary   ntfs            boot
         2      393GB   500GB  107GB   extended
         5      393GB   425GB  32.8GB  logical   linux-swap(v1)
         6      425GB   500GB  74.6GB  logical   ext4

    At this point, I'm wondering why the ntfs partition is flagged as "boot" and not my ext4 partition, which is the partition that contains / (and by extension /boot, since it's not on its own separate partition). Looking at mtab only confirms what I already know:

        eric@eric-ThinkPad-W530:~$ sudo cat /etc/mtab
        /dev/sdb6 / ext4 rw,errors=remount-ro 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        none /sys/fs/cgroup tmpfs rw 0 0
        none /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        udev /dev devtmpfs rw,mode=0755 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
        none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
        none /run/shm tmpfs rw,nosuid,nodev 0 0
        none /run/user tmpfs rw,noexec,nosuid,nodev,size=104857600,mode=0755 0 0
        none /sys/fs/pstore pstore rw 0 0
        systemd /sys/fs/cgroup/systemd cgroup rw,noexec,nosuid,nodev,none,name=systemd 0 0
        gvfsd-fuse /run/user/1000/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,user=eric 0 0
        /dev/sdb1 /media/eric/My\040Passport fuseblk rw,nosuid,nodev,allow_other,default_permissions,blksize=4096 0 0

    My lack of understanding concerning this is not vital to anything (this is only my development desktop partition), but it somehow annoys me. Any insight that could shed some light on this would be welcome.

    Read the article

  • How to be successful at BDD Specification Workshops?

    - by sigo
    Today we tried to introduce BDD into our software development process by having a specification workshop. For this workshop we had two developers, one tester, and one business analyst. The workshop lasted 1h30, and by the end of it we managed to figure out some BDD scenarios for our new feature. We tried to focus on finding the scenarios that we could miss, and the difficult ones. At the end of the workshop some people were actually unhappy with it. One developer felt he wasted his time, as he was used to being given the scenarios directly by the business analyst and reviewing them with her. The business analyst didn't feel confident in our scenario coverage (she had a feeling that we could have missed other important stuff), but more importantly felt that this workshop was also a waste of time, as she could have figured out all these scenarios by herself, and in a shorter period of time. So my question is how that kind of workshop can actually work. In theory, given you have a new feature to develop, you put the three 'amigos' (dev/tester/BA) in the same room so that they can collaborate on writing the different requirements for the new feature using examples. I can see all the benefits of that, especially in terms of knowledge sharing and a common product/end goal/done vision. But in practice, we still think it is more cost-effective to first have the BA work on her own on the examples, and only then have the scenarios reviewed/reworked by the three 'amigos'. By having the BA work on her own, we actually feel more confident that we are less likely to miss things, and we still get to review the scenarios afterwards to double-check. We don't think that simple brainstorming/deliberate discovery is actually enough to seriously cover all the requirements for a feature. The business analyst is actually the best person for that kind of stuff. What we do is simply review what she wrote and see if we then have a common understanding (which could lead to rewriting some of her scenarios or adding new ones she might have missed). This workshop lasted 1h30, and by the end of it we didn't feel confident enough about what we did... Sure, we could have spent more time on it, but honestly most people get exhausted after 1h30 of brainstorming. So how can you get that to work effectively in practice?

    Read the article

  • JavaOne Session Report - Java ME SDK 3.2

    - by Janice J. Heiss
    Oracle Product Manager for the Java ME SDK, Sungmoon Cho, presented a session, "Developing Java Mobile and Embedded Applications with Java ME SDK 3.2," wherein he covered the basic new features of the Java ME Platform SDK 3.2, a state-of-the-art toolbox for developing mobile and embedded applications.

    The session began with a summary of the four main components of the Java ME SDK:

    - A device emulator allows developers to quickly run and test applications before commercialization. It supports CLDC/MIDP, CLDC/IMP-NG, and CDC/AGUI.
    - A development environment assists with writing, running, debugging, and deploying, and enables on-device debugging.
    - Samples provide developers with useful code and frameworks.
    - IDE plugins for NetBeans and Eclipse equip developers with a CPU Profiler, Memory Monitor, Network Monitor, and Device Selector. This means that manual integration is no longer necessary.

    Cho then talked about the Java ME SDK's on-device tooling architecture:

    - Java ME SDK provides an architecture ideal for on-device debugging.
    - Device Manager plays the central role by managing different devices, whether it is the emulator, a device that Oracle provides or recommends, or a third-party device, as long as the devices have a Java runtime that supports the designated protocol.
    - The Emulator provides an accurate emulation, since it uses the same code base used in Oracle's Java ME runtime.
    - The Universal Emulator Interface (UEI) makes it easy for IDEs to detect the platform.

    He then focused on the Java ME SDK release highlights, which include:

    - Implementation and support for the new Oracle Java Wireless Client 3.2 runtime and the Oracle Java ME Embedded runtime. A full emulation for the runtime is provided.
    - Support for JSR 228, the Information Module Profile-Next Generation API (IMP-NG). This is a new profile for embedded devices.
    - A new Custom Device Skin Creator.
    - An Eclipse plugin for CLDC/MIDP.
    - Profiling, network monitoring, and memory monitoring are now integrated with the NetBeans profiling tools.
    - Java ME SDK Update Center.

    Cho summarized the main features:

    - IDE Integration (NetBeans and Eclipse) enables developers to write, run, profile, and debug their applications in their favorite IDE.
    - CPU Profiler: This enables developers to more quickly detect the hot spots where CPU time is being used. They can double-click a method to jump directly into the source code.
    - Memory Monitor: Developers can monitor objects and memory usage in real time.
    - Debugger on the Emulator and Device: Developers can run their applications step by step and inspect the variables to pinpoint the problem. The debugging can take place either on the emulator or the device.
    - Embedded Application Development: IMP-NG, Device Access, Logging, and AMS API support are now available.
    - On-Device Tooling: Connect your device to your computer, and run and debug the application right on your device.
    - Custom Device Skin Creator: Define your own device and test in an environment that is closest to your target device.

    The informative session concluded with a demo that showed more concretely how to apply the new features in Java ME SDK 3.2.

    Read the article

  • Ubuntu 12.04.1 failing in VMs (both VBox and VMware) on new HP Envy 4t-1000

    - by Chas
    Brand new to Linux, and getting frustrated trying to get an environment up with Ubuntu. My primary goal is to learn Linux and Apache/PHP development. I need to keep Windows as the main OS on my machine for work, so I'm trying to virtualize Ubuntu 12.04.1, without luck (many attempts). I have a new HP Envy 4t-1000 with 16GB RAM and 32GB SSD caching with a 500GB spindle hard drive. The graphics card is an Intel HD 3000 with AMD Radeon 7670M. When installing Ubuntu Desktop in VBox, I'm getting this result: https://forums.virtualbox.org/viewtopic.php?f=6&t=51939. With VMware Workstation 7 (patched), I complete the install of Ubuntu, it reboots, the purple desktop briefly flashes, then it drops to the command line. I bought a beginning Ubuntu book, and it recommends trying to manually configure graphics if this happens. So I tried doing a safe boot holding Shift: I get to the first screen (GRUB) fine, and I choose recovery mode. After choosing recovery mode, I get the recovery mode options and can arrow down to what the book suggests, 'Run in failsafe graphic mode.' Once I select this option, I get a black screen with a large white dialogue box. At the top it says "The system is running in low-graphics mode. Your screen, graphics card, and input device settings could not be detected correctly. You will need to configure these yourself." Then there is an OK button way down at the bottom. When I select 'OK' I get a menu with a few options; the book recommended 'reconfigure graphics.' When I try this, I get a menu of two options: 1) use generic (default) configuration, or 2) use backup. I've tried both options several times; hitting OK just refreshes the screen and nothing more. Rebooting at this point just goes back to the command line as before. I don't know what to do at this point; I've spent too many hours this weekend trying in both VBox and VMware to get Ubuntu going. Isn't there some very basic graphics display mode or something I can use to at least get into the desktop? I explored GRUB some more and tried to look at the startup and xserver logs; both are blank. No help there, I guess? When I try to choose 'Edit the configuration file', then 'OK', the screen just refreshes on the same menu options and nothing happens. Thanks for any advice. I really need to focus on learning Linux, Apache, and PHP, so perhaps Ubuntu just won't work on my hardware? Any other suggestions? I will need to virtualize. THANKS for any help/advice.

    Read the article

  • Is IE9 a modern browser?

    - by anirudha
    Is IE9 a modern browser? Let me show you a post that compares IE with Chrome. Do you really think Microsoft makes something better whenever Firefox and Chrome do? That's the point: Microsoft always ships IE as version upon version, not as a stream of updates the way Firefox goes from 3.6.13 to 3.6.14 and Chrome pushes updates as soon as possible. IE updates never come quickly; Microsoft will think about making IE10 instead of shipping updates for 9. And what do they really tell us is new? They show off the developer tools in IE9, which have three new tabs or panels; is that enough? Firefox and Chrome have many plugins that make development easier, while IE still has a developer tool that is nowhere near as powerful as Firebug in Firefox. They boast about performance but forget Luna users [Windows XP], and IE never runs on other platforms, while Chrome and Firefox can. There is no customization in IE, whereas Chrome and Firefox have countless plugins and addons. IE9 now shows off features that were implemented in Firefox long ago. Of course, there is no rule that says one side gets it right every time, and no one can predict what will work out better in the future. So keep one thing in mind: never waste time, and use whatever makes the task easier, open source or closed source, from inside or outside Microsoft; it doesn't matter. Everyone tells you much more than they deliver, IE included, and they never tell you that a thing missing from IE can be found elsewhere, or suggest using something else when you need a feature their software doesn't have. Be more wary of IE, because it is made commercially rather than truly for the public; if it really were for the public, why stop Windows XP users from using IE9, when Firefox and Chrome never force anything? They need whatever forces more copies of Windows to sell, so they plan features for Windows 8 and always push users to purchase this for that; that trick is what sells their software. Outside Microsoft, Mozilla and Chrome both behave better with users and their feedback; they respect users, their privacy, and their input. If you don't believe it, compare how many problems you find in IE with how quickly bugs get killed in Chrome and Firefox. Because IE is not open source, we need to boycott it; and besides, there is no customization there, even though customization makes users' tasks easier, even with Twitter and Facebook.

    Read the article

  • Deploying, but without those pesky test files!

    - by Chris Skardon
    Silverlight testing is great, we all know that (don't we??); we're expected to do it as part of the development process. But once we've got an awesome application written and we come to deploy it, we don't want the test files going out with it... You might be like me and have the files in a Web project; let's face it, that's how we're pushed into doing it... So let's stick with it! Now, I'm deploying via the wonders of the Web Deployment shizzle, but this also applies to the classic 'installer' project as well. Baaaasically, we're going to use the 'Debug' / 'Release' configurations to include given files. ?? OK, you know in the top of your Visual Studio editor you (usually) have a drop down which predominantly reads 'Debug'? Those are 'configurations'. Mostly we don't bother changing it, primarily due to laziness, but also the fact that we generally don't see 'Release' as actually doing anything other than making it harder to find problems :) Well, today my friends, we're going to change that bad boy...

    The next few steps are just helping you set up a new 'Debug' configuration, but you can just switch to the 'Release' configuration and skip to the end...

    First, let's go to the Configuration Manager. There are multiple ways to get there: through the 'Build' menu (at the bottom), or via the drop down which currently has 'Debug' in it :) Got it? Select 'New' from the 'Active solution configuration' drop down. Create a new configuration, kind of like the picture below shows (or for those graphically challenged: Name: DebugWithNoTests, Copy settings from: 'Debug', ensuring the 'Create new project configurations' checkbox is checked). Press OK. VS will do some shizzle, and in the Configuration Manager you will see pretty much exactly what you did before, only with 'Debug' replaced by 'DebugWithNoTests'. Turn off the build options for the test projects. We won't need them..

    IF you skipped down from the top, this is where you'll be wanting to stop!!!

    Close, and now we're one notepad step away from achieving our goals. Yes, I said notepad. You can't do what we're going to do in VS. (Pity.) Go to the folder where your web project is and right-click on the '.csproj' file. Now open it with notepad. Head on down to the '<Content Include' bits; they'll look like this:

        <ItemGroup>
          <Content Include="ClientBin\Tests.xap" />
          ...
        </ItemGroup>

    Take this and modify each of the files you don't want deployed, changing them to:

        <Content Include="ClientBin\Tests.xap" Condition="'$(Configuration)' == 'Debug'" />

    (There's a fuller sketch of the finished ItemGroup at the end of this post.) Once you've got that sorted, publish your project, once with the Debug configuration selected and again with any other configuration ('Release', 'DebugWithNoTests', etc.). No files! Huzzah!
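    For illustration, a project with a couple of test artifacts might finish with an ItemGroup something like this (the file names are invented; the app's own .xap keeps an unconditional entry so it deploys in every configuration):

        <ItemGroup>
          <!-- test files: only included when building the Debug configuration -->
          <Content Include="ClientBin\Tests.xap" Condition="'$(Configuration)' == 'Debug'" />
          <Content Include="ClientBin\TestPage.html" Condition="'$(Configuration)' == 'Debug'" />
          <!-- the real app: always included -->
          <Content Include="ClientBin\MyApp.xap" />
        </ItemGroup>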

    Read the article

  • PBCS Hyperion Planning in the Cloud Implementation Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    Oracle Planning and Budgeting Cloud Service (PBCS) opens up opportunities for organizations of all sizes to streamline planning and forecasting, accelerate deployment, and reduce costs. This one-day in-person workshop is delivered by Oracle Development (free to OPN member partners) and will cover the handoff from selling-to-implementing of PBCS. Although the basic building blocks are the same as with on-premises Planning, there is a paradigm shift when it comes to selling and implementing a Cloud Service solution. The value proposition behind Oracle Planning and Budgeting Cloud Service is all about the deployment model, how it's sold and how it gets implemented: simplicity, fast adoption, and flexible deployment, without sacrificing first-class functionality. To be successful, the entire cycle from sales to implementation should consistently support this value proposition to your clients. This training event is for OPN member partners whose business roles involve presales, implementation consulting, and support. This workshop briefly reviews the sales approach, as background, with emphasis on partner sales support. The main objective is to learn what is needed to successfully implement Oracle Planning and Budgeting Cloud Service once the sales handoff is made: how to leverage your current Hyperion Planning knowledge and use the features designed specifically to build out a Cloud Service solution. This workshop is being offered at three locations for partners from all countries in EMEA:

    - June 24, 2014: Kista, Sweden
    - June 26, 2014: Reading, United Kingdom
    - June 29-30, 2014 (split days): Dubai, United Arab Emirates

    To get more information, to check pre-requisites, and to register, click here.

    Read the article

  • Changing Your Design for Testability

    Sometimes I come across a way of putting something that is pithily good: not Hallmark trite, but an impactful and concise way of clarifying a previously obscure concept. A recent one of these happy occurrences was when I was reading the excellent Art of Unit Testing by Roy Osherove. After going through the basics of why you'd want to test code and how to do it, Roy confronts a frequent objection to having unit tests, that it ends up changing how you design your components:

        When we write unit tests for our code, we are adding another end user (the test) to the object model. That end user is just as important as the original one, but it has different goals when using the model. The test has specific requirements from the object model that seem to defy the basic logic behind a couple of object-oriented principles, mainly encapsulation. [emphasis added by me]

    When I read this, something clicked for me. I used to find it persuasive that because unit tests caused you to change your design, they were more disruptive than they were worth. The counter-argument I heard is that the disruption was OK because testable design was just obviously better. That argument was not convincing, as it seemed like delusional arrogance to suggest that any one type of design was just inherently better for the particular applications I was building. What was missing was that I was not thinking of unit tests as an additional and equal end user of my design. If I accepted that proposition, then it was indeed obvious that a testable design was better, because now all users of my component would be satisfied. Have I accepted that proposition? I'd phrase it slightly differently. I find more and more that having unit tests helps me write better, less buggy code before it gets to production or QA. As I write more unit tests, it gets easier to see how to create testable components, so I don't feel like it's taking me as much extra time up front. I pick and choose the components that seem most likely to benefit from automated tests, and it is working out nicely. If you already practice Test Driven Development, this whole post was probably a waste of your time <g> If you hate the idea of unit tests, well, probably not a great value prop for you either. However, if you are somewhere in between, at least take a minute and check out a sample chapter from Roy's book at: http://www.manning.com/osherove/.

    Read the article

  • Using all Ten IO slots on a 7420

    - by user12620172
    So I had the opportunity recently to actually use up all ten slots in a clustered 7420 system. This actually uses 20 slots, or 22 if you count the Clustron cards. I thought it was interesting enough to share here. This is at one of my clients here in southern California. You can see the picture below. We have four SAS HBAs instead of the usual two. This is because we wanted to split up the back-end traffic for different workloads. We have a set of disk trays coming from two SAS cards for nothing but Exadata backups. Then we have a different set of disk trays coming off of the other two SAS cards for non-Exadata workloads, such as regular user file storage. We have two InfiniBand cards, which allow us to do a full mesh directly into the back of the nearby production Exadata, specifically for fast backups and restores over IB. You can see a third IB card here, which is going to be connected to a non-production Exadata for slower backups and restores from it. The 10Gig card is for client connectivity, allowing other, non-Exadata Oracle databases to make use of the many snapshots and clones that can now be created using the RMAN copies from the original production database coming off the Exadata. This allows a good number of test and development Oracle databases to use these clones without affecting performance of the Exadata at all. We also have a couple of FC HBAs, both for NDMP backups to an Oracle/StorageTek tape library and also for FC clients to come in and use some storage on the 7420.

    Now, if you are adding more cards to your 7420, be aware of which cards you can place in which slots. See the bottom graphic just below the photo. Note that the slots are numbered 0-4 for the first five cards, then the "C" slot, which holds the dedicated cluster card (called the Clustron), and then another five slots numbered 5-9. Some rules for the slots:

    - Slots 1 & 8 are automatically populated with the two default SAS cards. The only other slots you can add SAS cards to are 2 & 7.
    - Slots 0 and 9 can only hold FC cards. Nothing else. So if you have four SAS cards, you are now down to only four more slots for your 10Gig and IB cards. Be sure not to waste one of these slots on an FC card, which can go into 0 or 9 instead.
    - If at all possible, slots should be populated in this order: 9, 0, 7, 2, 6, 3, 5, 4.

    Read the article

  • How / Where can I host my Java web application? [closed]

    - by Huliax
    Possible Duplicate: How to find web hosting that meets my requirements?

    In case you want to skip to the crux of my inquiry, just read the bold type. I just finished my CS degree (at 39 years old :-)). For my final project I designed and built a system that can provide local positioning / location awareness to mobile wifi devices (I only have an Android client thus far). The server receives data from clients, processes it, and responds to the clients with messages containing information about their respective locations. I would like to continue the project (perhaps release it as open source, but that is a different discussion). Thus far my server application has been running on the CS department's hardware, where I could pretty much do whatever I wanted. I'm getting kicked off that system in a few weeks, so I have to find a new home for my server application. I need a host that will let me run my Java server (along with a MySQL db), preferably on the cheap since I haven't yet got a job. I have very little experience with the "real world" of web development / hosting. I'm having trouble figuring out what kind of hosting service will let me run my application as is. If that turns out to be a tall order, then I need to know what my options are for changing things so that I can get up and running with some hosting. As an aside, I'm also researching whether or not I should rewrite this in a different language, trying to figure out if there is a substantially better (for whatever reason) one for what I'm doing. This might also potentially have a bearing on my hosting needs. One possibility is to write the server in something more widely accepted by hosting services. I have been searching for answers to my question and haven't found quite what I'm looking for. Part of the problem might be that I don't know exactly what terminology to use. If there is a good answer to this question elsewhere, please feel free to point me towards it. Thanks for help / advice.

    Read the article

  • ADF Mobile Released!!

    - by Denis T
    We are pleased to announce the general availability of the newest version of Oracle's ADF Mobile framework. This new framework provides the much-anticipated on-device capabilities that the latest mobile applications require.

    Feature Highlights:

    - Java: Oracle brings a Java VM embedded with each application, so you can develop all your business logic in the platform-neutral language you know and love! (Yes, even on iOS!)
    - JDBC: Since we give you Java, we also provide JDBC, along with a SQLite driver and engine that also supports encryption out of the box.
    - Multi-Platform: Truly develop your application only once and deploy to multiple platforms. iOS and Android platforms are supported for both phone and tablet.
    - Flexible: You can decide how to implement the UI: (a) use an existing server-based UI framework like JSF; (b) use your own favorite HTML5 framework like jQuery; (c) use our declarative HTML5 component set provided with the framework. ADF Mobile XML, or AMX for short, provides all the normal input and layout controls you expect, and we also add charts/maps/gauges along with it to provide a very comprehensive set of UI controls. You can also mix and match any of the three for ultimate flexibility!
    - Device Feature Access: You can get access to device features from either Java or JavaScript to invoke features like camera, GPS, email, SMS, contacts, etc.
    - Secure: ADF Mobile provides integrated security that works with your server back-end as well. Whether you're using remote URLs, local HTML, or AMX, you can secure any/all of your features with a single consistent login page. Since we also give you SQLite encryption, you can be assured that your data is safe.
    - Rapid: Using the same development techniques that ADF developers are already used to, you can quickly create mobile applications without ever learning another language!

    Architecture: ADF Mobile is a "hybrid" architecture that employs a natively built "container" on each platform, hosting a number of browser windows that are used to display the application content. We add the Java VM as a natively built library to the container for business logic.

    How To Get Started: ADF Mobile is an extension to the recently released JDeveloper version 11.1.2.3.0. Simply get the latest JDeveloper from Oracle Technology Network and use the Check for Updates feature to get the ADF Mobile extension. Note: ADF Mobile does not require developers to learn any other languages or frameworks, but to build/deploy to iOS you must be on an Apple Macintosh™ and have Xcode installed, and to build/deploy to Android™ you must have the Android SDK installed.

    Read the article

  • "Expecting A Different Result?" (2 of 3 in 'No Customer Left Behind' Series)

    - by Kathryn Perry
    A guest post by David Vap, Group Vice President, Oracle Applications Product Development. Many companies already have some type of customer experience initiative in process, or one that could be framed as such. The challenge is that these initiatives too often are started in a department silo, don't have the right level of executive sponsorship, or have been initiated without the necessary insight and strategic business alignment. You can't keep doing the same things, give them a customer experience name, and expect a different result. You can't continue to compete on price or features alone - that is not sustainable in commoditized markets. And ultimately, investing in technology alone doesn't solve customer experience problems; it just adds to their complexity. You need a customer experience strategy and an approach for executing a customer-centric worldview within your business. To develop this, you must take an outside-in journey through how your customers interact with your business to establish a benchmark of your customers' experiences. Then you must get cross-functional alignment on what you are trying to achieve in the near, mid, and long term. Your execution of that strategy should be based on a customer experience approach: Understand your customer: You need to capture insights across interactions, channels (including social), and personas to better understand whom to serve, how to serve them, and when to serve them. Not all experiences or customers are equal, so leverage this insight to understand the strategic business objectives you need to address, then determine which experiences can be improved immediately and which over time to get the result you need. Empower your ecosystem: You need to align your front-line employees with your strategy and give them the power, insight, and tools to cultivate a culture of strengthening relationships with your customers. You also need to provide the transparency, access, and collaboration that enable your customers and partners to self-serve, self-solve, and share with ease. Adapt your business: You need to enable the discipline of agility within your organization and infrastructure so that you can innovate, tailor, and personalize experiences, both reactively from insight and proactively in real time, so you can stay ahead of shifting market trends and evolving consumer behaviors. No longer will the old approaches provide the same returns. To compete, differentiate, and win in a world where the customer has the power, you must execute a strategy that delivers a better brand experience for your customers. Note: This is Part 2 in a three-part series. Part 1 is here. Stop back for Part 3 on November 28.

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    We've recently finished the first phase of our project. We used agile with fortnightly sprints, and while the application turned out well, we're now turning our eyes to some of the maintenance tasks. One issue is that all of our documentation is in the form of specs. Each spec describes one or more stories and is generally a body of work that a few devs could knock out in a week. For development, that works really well: every two weeks the devs get handed a spec, and it's a nice discrete chunk of work that they can just do. From a documentation point of view, this has become a mess. The problem with writing specs focused on delivering just-in-time requirements to developers is that we haven't placed much emphasis on the big picture. Specs come from all different angles - one could be describing a standard function, another parts of a workflow, another a particular screen... And now we have business rules about our application scattered across 120 documents. Finding the document that covers a particular business rule or function is quite hard because you don't know which document has the information, and making a change request is equally hard because, once again, we are unsure about which spec to change. We have maybe a couple of weeks of lull before it's back to speccing out functionality for the next phase, but in this time I'd like to revisit our processes. I think the way we have worked so far, delivering fortnightly specs, works well, but we also need a way to manage our documentation so that the business rules for a given function or workflow are easy to locate and change. I have two ideas. One is to compile all of our specs into a series of master specs broken down by a few broad functional areas: the specs describe the sprint, the master specs describe the system. The problems I can see are 1) our existing 120 specs are not all neatly divided into broad functional areas, and some will require breaking up, merging, etc., which will take a lot of time, and 2) we'd be writing specs and updating master specs in each new sprint, which seems like double the work - and then do the devs look at the spec or the master spec? My other suggestion is to concede that our documentation is too big of a mess and manage that mess going forward: go through each spec, assign keywords to it, and then when we want to find a function, search by keyword. The problem I can see is that the business rules are still scattered everywhere; keywords just make them easier to find. Anyway, if anyone has any decent ideas or experience to share about how best to manage documentation, I would really appreciate it.

    Read the article

  • git workflow for separating commits

    - by gman
    Best practice with git (or any VCS, for that matter) is supposed to be to have each commit make the smallest change possible. But that doesn't match how I work at all. For example, I recently needed to add some code that checked whether the version of a plugin matched the versions my system supports, and if not, printed a warning that the plugin probably requires a newer version of the system. While writing that code I decided I wanted the warnings to be colorized. I already had code that colorized error messages, so I edited that code. That code was in the startup module of one entry point to the system; the plugin-checking code was in another path that didn't use that entry point, so I moved the colorization code into a separate module so both entry points could use it. On top of that, in order to test that my plugin-checking code works, I needed to edit UI/UX code to make sure it tells the user "You need to upgrade". When all is said and done, I've edited 10 files, changed dependencies, the two entry points are now both dependent on the colorization code, etc., etc. Being lazy, I'd probably just git add . && git commit -a the whole thing. Spending 10-15 minutes trying to manipulate all those changes into 3 to 6 smaller commits seems frustrating, which brings up the question: are there workflows that work for you or that make this process easier? I don't think I can somehow magically always modify things in the perfect order, since I don't know that order until after I start modifying and seeing what comes up. I know I can use git add --interactive etc., but it seems, at least for me, kind of hard to know whether I'm grabbing exactly the correct changes so that each commit will actually work. Also, since the changes are sitting in the working directory, it doesn't seem like it would be easy to run tests on each commit short of stashing all the changes. And then, if I were to stash and run the tests, and I had missed a few lines or accidentally added a few too many, I have no idea how I'd easily recover from that (as in, either grab the missing lines from the stash and put the rest back, or take the few extra lines I shouldn't have grabbed and shove them into the stash for the next commit). Thoughts? Suggestions? PS: I hope this is an appropriate question; the help says the site covers development methodologies and processes.
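
    One widely used answer to this situation is patch-mode staging combined with stashing the unstaged remainder, so each candidate commit can be tested in isolation before it is recorded. A sketch of that loop follows - these are standard git commands (git stash push --keep-index requires git 2.13 or later; older versions use git stash save --keep-index), and the test command is a placeholder for whatever the project actually runs:

        # Stage only the hunks that belong to the first logical change.
        git add -p

        # Set aside everything that is NOT staged, keeping the index intact,
        # so the working tree now matches exactly what the commit will contain.
        git stash push --keep-index

        # Run the test suite against just this slice of the changes.
        make test    # placeholder for your actual test runner

        # If a line is missing, restore the remainder with `git stash pop`,
        # stage the missing hunk with another `git add -p`, and re-test.
        # Once the tests pass, record the commit.
        git commit -m "Warn when a plugin requires a newer system version"

        # Bring the remaining changes back and repeat for the next commit.
        git stash pop

    This doesn't remove the 10-15 minutes of sorting hunks into commits, but it does address the "will each commit actually work" worry, since every commit is tested against a working tree containing only its own changes.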

    Read the article

< Previous Page | 903 904 905 906 907 908 909 910 911 912 913 914  | Next Page >