Search Results

Search found 7238 results on 290 pages for 'step'.


  • Free Updates and Errata for Oracle Linux

    - by Lenz Grimmer
    ISO images of the Oracle Linux installation media as well as individual binary RPMs (and the sources) of major and minor releases (Updates) have always been freely available for download, use and distribution, ever since we started the Oracle Linux support program. We're now taking this a step further: in addition to the above, we will now also provide updated packages or errata for free from separate yum repositories on http://public-yum.oracle.com. If you would like to keep your Oracle Linux system up to date, you can now do so by subscribing your system to the respective "_latest" repository for your distribution, e.g. "ol6_latest" for Oracle Linux 6. See the installation instructions on the public yum front page for details on how to enable these repositories. If you would like to also receive free updates to the Unbreakable Enterprise Kernel Release 2, make sure to enable the "[ol6_UEK_latest]" repository as well - updates to the kernel will be made available from this separate channel until it is included in the next set of installation media. Now what does this mean for Oracle Linux Support? Getting access to the updates and errata was just one part of the offering – the following benefits will still be available with an Oracle Linux Support Subscription only: Full indemnification against intellectual property claims. Use of base functionality in Enterprise Manager 12c for Linux and Enterprise Manager Ops Center for provisioning, patching, management and monitoring of Oracle Linux. Access to additional software channels on the Unbreakable Linux Network (ULN) (e.g. DTrace beta releases or ASM support packages). Wim also published a blog post with his take on the announcement.
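    For anyone who wants to try this, here is a minimal sketch of the steps described above, assuming Oracle Linux 6 and the repo file name published on public-yum.oracle.com at the time (adjust for your release):

        # fetch the repo definition into yum's configuration directory
        cd /etc/yum.repos.d
        sudo wget http://public-yum.oracle.com/public-yum-ol6.repo
        # edit the file and set enabled=1 for the [ol6_latest] and [ol6_UEK_latest] sections,
        # then pull down the free errata
        sudo yum update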

    Read the article

  • Creating a new project for Team Foundation Server Basic.

    - by Enrique Lima
    We have installed and configured TFS, and we have connected to it using Visual Studio.  Now it is time to get a project created. From Team Explorer, we will right click on the servername\Collection item in the tree and select New Team Project. Once selected, this will open the New Team Project dialog.  Provide a name, then click Next.   The next step is to select a Process Template.  By default you will have two available (but there are many downloadable options).  It is important to understand what each template brings, because it determines the lifecycle management process and options we will live with for the project. Once selected, click Next. Now we are at the point of specifying where our code will be kept, the Source Control Settings part of the wizard.  Since we are starting new, we will select an empty source control folder. Click Next. Next we get a Summary view of the options selected. Click Finish. Once the template is downloaded, applied and our choices processed, we have completed the project creation.   This should be our final product …

    Read the article

  • Getting started with Access Services — Part 1

    - by ybbest
    In SharePoint 2010, we can publish an Access database to SharePoint and make it accessible to users through a SharePoint site. Today, I’d like to show you how to publish an Access database to SharePoint. 1. Open Access 2010 and click New >> Sample templates 2. Then select Issues Web Database (you can select any web database here; I chose the Issues Web Database) 3. The next step is to publish this Access database to SharePoint; you can do so by going to File >> Save & Publish >> Publish to Access Services 4. Finally, fill in the details of the SharePoint site and the site name, and publish the database to SharePoint 2010. If you need to publish the Access database to an HTTPS SharePoint site, check my previous blog here. 5. You will see the “Publish Succeeded” message 6. Navigate to the site and you will see that your Access database is now available online.

    Read the article

  • Restoring GRUB2 on Software RAID 0 after Windows 7 wiped it using Ubuntu 10.10 LiveCD

    - by unknownthreat
    I have installed Ubuntu 10.10 on my system. However, I needed to install Windows 7 back, and I expected that it would alter GRUB - and it did. Right now, the partition layout on my Software RAID 0 looks like this: nvidia_acajefec1 is Ubuntu 10.10 and nvidia_acajefec3 is Windows 7. I've been following some guides around and I am always stuck at GRUB not being able to detect the usual RAID content. I've tried running: sudo grub > root (hd0,0) GRUB complains it couldn't find my hard disk. So I tried: find (hd0,0) And it complains that it couldn't find anything. So I tried: find /boot/grub/stage1 It said "file not found". Here's the text from the console: ubuntu@ubuntu:~$ grub Probing devices to guess BIOS drives. This may take a long time. [ Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists the possible completions of a device/filename. ] grub> root (hd0,0) root (hd0,0) Error 21: Selected disk does not exist grub> find /boot/grub/stage1 find /boot/grub/stage1 Error 15: File not found Fortunately, one person suggested that what I've been trying to do is for GRUB Legacy, not GRUB2. So I went to the suggested website (http://grub.enbug.org/Grub2LiveCdInstallGuide), tried to look around, and ran: ubuntu@ubuntu:~$ sudo fdisk -l Unable to seek on /dev/sda This is just step 2 of the instructions in the Grub2LiveCdInstallGuide and I cannot proceed because it cannot seek /dev/sda. However: ubuntu@ubuntu:~$ sudo dmraid -r /dev/sdb: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0 /dev/sda: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0 So what now? Do you have an idea of how to make fdisk see my RAID array on the live CD (Ubuntu 10.10)? Honestly, I am lost, very lost, in trying to restore GRUB2 on this software RAID 0 system right now.
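    For reference, the approach that guide works toward is to chroot into the installed Ubuntu via the dmraid-mapped device and reinstall GRUB2 from there. A rough sketch only, assuming the Ubuntu root filesystem really is on /dev/mapper/nvidia_acajefec1 (device names and mount points may differ on your system):

        sudo mount /dev/mapper/nvidia_acajefec1 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        grub-install /dev/mapper/nvidia_acajefec   # reinstall GRUB2 to the RAID set
        update-grub                                # regenerate grub.cfg; should pick up Windows 7
        exit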

    Read the article

  • Continuous Integration using Docker

    - by Leon Mergen
    One of the main advantages of Docker is the isolated environment it brings, and I want to leverage that advantage in my continuous integration workflow. A "normal" CI workflow goes something like this: Poll repository for changes Pull from repository Install dependencies Run tests In a Dockerized workflow, it would be something like this: Poll repository for changes Pull from repository Build docker image Run docker image as container Run tests Kill docker container My problem is with the "run tests" step: since Docker is an isolated environment, intuitively I would like to treat it as one; this means the preferred method of communication is sockets. However, this only works well in certain situations (a webapp, for example). When testing different kinds of services (for example, a background service that only communicates with a database), a different approach would be required. What is the best way to approach this problem? Is it a problem with my application's design, and should I design it in a more TDD, service-oriented way that always listens on some socket? Or should I just give up on isolation, and do something like this: Poll repository for changes Pull from repository Build docker image Run docker image as container Open SSH session into container Run tests Kill docker container SSH'ing into the container seems like an ugly solution to me, since it requires deep knowledge of the contents of the container, and thus breaks the isolation. I would love to hear SO's different approaches to this problem.
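    One hedged sketch of how the Dockerized steps above can be scripted without the SSH session is to run the test suite as the container's main process, so the test runner's exit code becomes the build result. The image names, the ci_db container and ./run_tests.sh are placeholders, not from the question:

        # build a throwaway image for this revision
        docker build -t myapp:ci .
        # start any backing service the tests need, e.g. a database container
        docker run -d --name ci_db postgres
        # run the tests as the container command; --link exposes the database and
        # `docker run` exits with the test runner's exit code
        docker run --rm --link ci_db:db myapp:ci ./run_tests.sh
        # clean up
        docker rm -f ci_db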

    Read the article

  • SkyDrive and Consumer Cloud Services

    - by Tim Murphy
    Paul Thurrott recently posted an article on the future of SkyDrive, and I was asked by @UserCommunity what I thought about its future.  So let’s take a look. The breakdown from Microsoft that Paul described is, I believe, an accurate representation of users and usages. While I can’t say that I leverage SkyDrive to the extent that it was meant to be used, I do enjoy having OneNote hosted there and being able to consult and edit it from the desktop, web and Windows Phone. Taking that one step further, the Midwest Geeks group, which started as the community of Microsoft-related user groups in our region, uses SkyDrive groups and shares calendars and documents.  This collaboration aspect isn’t new in itself, but having it connected with the rest of your cloud assets makes life easier. Another recent usage of this type of cloud service is storing your personal music files in order to get that same universal access.  This is a scenario that has some arguments for and against.  On the one hand, own once and listen anywhere is great, but on the other hand the bandwidth cost becomes a giant downside.  This is especially the case since most carriers are now doing away with unlimited data packages. Ultimately I see this type of resource growing and evolving at a phenomenal rate over the next few years as we continue to become more mobile.  Having multiple players such as SkyDrive and iCloud will only help to give us more options.  Only time will tell where we end up next. del.icio.us Tags: SkyDrive,Cloud Services,Paul Thurrott,UserCommunity

    Read the article

  • How do I configure networking when using XEN on 12.04 Server

    - by Ingram
    I'm following this guide, https://help.ubuntu.com/community/XenProposed, and I'd like to setup a static IP address, but I don't know how. I'm using 12.04 Server. This is the step I'm having trouble at (I'm using eth1): Edit /etc/network/interfaces, and make it look like this: auto lo iface lo inet loopback auto xenbr0 iface xenbr0 inet dhcp bridge_ports eth0 auto eth0 iface eth0 inet manual I do this, and here is my output from ifconfig: eth1 Link encap:Ethernet HWaddr 00:17:a4:f8:79:20 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:478264 errors:0 dropped:0 overruns:0 frame:0 TX packets:2895 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:46470058 (46.4 MB) TX bytes:214620 (214.6 KB) Interrupt:17 Memory:fa000000-fa012800 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) xenbr0 Link encap:Ethernet HWaddr 00:17:a4:f8:79:20 inet addr:192.168.1.122 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::217:a4ff:fef8:7920/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:471842 errors:0 dropped:0 overruns:0 frame:0 TX packets:2795 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:37005780 (37.0 MB) TX bytes:182010 (182.0 KB) So how do I give this system a static IP address? Sorry, I'm not that familiar with networking on Ubuntu.
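    A sketch of what a static configuration for the bridged setup above might look like in /etc/network/interfaces - the address and netmask are taken from the ifconfig output in the question, while the gateway and DNS server are assumptions you would replace with your own:

        auto lo
        iface lo inet loopback

        auto eth1
        iface eth1 inet manual

        auto xenbr0
        iface xenbr0 inet static
            address 192.168.1.122
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 192.168.1.1
            bridge_ports eth1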

    Read the article

  • Best practice with branching source code and application lifecycle

    - by Toni Frankola
    We are a small ISV shop and we usually ship a new version of our products every month. We use Subversion as our code repository and Visual Studio 2010 as our IDE. I am aware a lot of people are advocating Mercurial and other distributed source control systems, but at this point I do not see how we could benefit from these - though I might be wrong. Our main problem is how to keep branches and the main trunk in sync. Here is how we do things today: Release new version (automatically create a tag in Subversion) Continue working on the main trunk that will be released next month And the cycle repeats every month and works perfectly. The problem arises when an urgent service release needs to be released. We cannot release it from the main trunk (2) as it is under heavy development and it is not stable enough to be released urgently. In such cases we do the following: Create a branch from the tag we created in step (1) Bug fix Test and release Push the change back to the main trunk (if applicable) Our biggest problem is merging these two (branch with main). In most cases we cannot rely on automatic merging because e.g.: a lot of changes have been made to the main trunk merging complex files (like Visual Studio XML files etc.) does not work very well another developer / team made changes you do not understand and you cannot just merge them So what do you think is the best practice to keep these two different versions (branch and main) in sync? What do you do?
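    For illustration, the branch-and-port-back steps described above map to raw Subversion commands roughly like this (tag name, branch name and revision number are placeholders; the ^/ syntax resolves against the repository root when run from inside a working copy):

        # 1. branch for the urgent service release, starting from last month's tag
        svn copy ^/tags/2011.05 ^/branches/2011.05-hotfix -m "Branch for urgent service release"
        # 2-3. check out the branch, fix the bug, test and release from it
        svn checkout ^/branches/2011.05-hotfix hotfix-wc
        # 4. port the fix back to trunk as a cherry-pick merge of the fix revision
        svn checkout ^/trunk trunk-wc
        cd trunk-wc
        svn merge -c 4711 ^/branches/2011.05-hotfix
        svn commit -m "Merge r4711 (hotfix) from the 2011.05-hotfix branch into trunk"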

    Read the article

  • CodePlex Daily Summary for Monday, October 07, 2013

    CodePlex Daily Summary for Monday, October 07, 2013Popular ReleasesGhostscript.NET: Ghostscript.NET v.1.1.0.: v.1.1.0. added GhostscriptViewer state handling (SaveState, RestoreState) GhostscriptRasterizer constructor is extended in order to support usage of the existing GhostscriptViewer instance. fixed problem while using a 32-bit assembly with 32-bit version of Ghostscript on 64-bit Windows: It couldn't find a registry key of installed Ghostscript. Reported and fixed by "r0land". v.1.0.9. implemented EPS (Encapsulated PostScript) support for the GhostscriptViewer. added GhostscriptRasterize...Ghostscript Studio: Ghostscript.Studio v.1.0.2: Ghostscript Studio is easy to use Ghostscript IDE, a tool that facilitates the use of the Ghostscript interpreter by providing you with a graphical interface for postscript editing and file conversions. Ghostscript Studio allows you to preview postscript files, edit the code and execute them in order to convert PDF documents and other formats. The program allows you to convert between PDF, Postscript, EPS, TIFF, JPG and PNG by using the Ghostscript.NET Processor. v.1.0.2. added custom -c s...cmdradio: v0.1.1 binary: Default download in win32. For other OS see here. This is alpha version. Please report all bugs.Media Companion: Media Companion MC3.579b: Fixed IMDB actor scraping for Movies and TV. Note: there are a couple of new functions that are not active, as this release needed to be done due to IMDB change. New* TV - context menu for rescrapeing Poster image/Banner image Fixed* Both - Fixed actor scraping from IMDB * Movie - Fixed Tableview if Movie's movieset was not in MC's list of moviesets * Movie - Rename with mediainfo now lowercase. * Both - added ignore "A " in titles. Separate option in General Preferences. * Tv - Changing ...VidCoder: 1.5.7 Beta: Updated HandBrake core to SVN 4819. About dialog now pulls down HandBrake version from the DLL. Added a confirmation dialog to Stop if the encode has been going on for more than 5 minutes. Fixed handling of unicode characters for input and output filenames. We now encode to UTF-8 before passing to HandBrake. Fixed a crash in the queue multiple titles dialog. Added code to rescue tool windows which get placed outside of the visible screen area.Wsus Package Publisher: Release v1.3.1310.05: Enhance the "Reboot Remote Computers", by adding a timer before the reboot occure. So that remote users can save their documents and close applications. You can also add a message to be display. In 'Tools'->'Settings'-> Misc Tab, you can set a default message. Enhance the "Compare Computers against AD", by choosing OUs to include in the comparison.Event-Based Components AppBuilder: AB3.Iteration.53: Iteration 53 (Feature): Allow drag&drop of existing component (flow, step) from component list to chart. Duplicate names are automatically recognized and solved. By the color of the draged component you can see what kind of component (flow or step) is currently draged. New: AddExistingComponentFlow, PartDragDropEventHandler, ExistingStepPreparerPulse: Pulse 0.6.7.3: Pulse is now accepting donations. To donate by Bitcoin or PayPal see https://pulse.codeplex.com/wikipage?title=Donations Lots of updates in v0.6.7.3: (Feature) New option allows you to disable wallpaper changing when a full screen application is running. This way Pulse doesn't slow down/lag your videos and games :) (Fix) Some users were getting Wallbase errors when logging in. This has been fixed. 
(Feature) Right click a provider and you can now make a copy of it by selecting the "Dupl...MoreTerra (Terraria World Viewer): MoreTerra 1.11.1: Release 1.11.1 =========== =Bug Fixes= =========== Added more tile blocks (Clouds, crimstone) Added items (binoculars, rope, Pirahna Gun) Added ores (Lead, Tin) Chests now work, I broke them yesterday. =============== =Known Issues= =============== I am having trouble with new background walls. So you will see a red outline for crimson then a pink inside. Same with where I think the queen bee lives.VG-Ripper & PG-Ripper: PG-Ripper 1.4.19: NEW: Added Option to login as Guest NEW: Added Menu Option to delete an Forum Account NEW: Added Support for "ImageTeam.org links FIXED: Fixed Ripping of http://forum.babeunion.com ForumsSingle Line Diagram (Electrical) Control Library: Single Line Diagram (Electrical) Control Lib - 1.2: In this release of SLD (Electrical) Control Lib I have fix code for the following symbols... 1. Switch 2. Isolator 3. Lightening Arrestor 4. Meter A new symbol is added for "Pothead Terminal" You can give it a try. I will keep improving and posting the new features.State of Decay Save Manager: Version 1.0.4: Add version at bottom of formCollection Commander for Configuration Manager 2012: CMCollCtr_1.0.0.2: Windows Installer (MSI) setup;New Ribbon Toolbar implemented; more Powershell commands...Classic WiX Burn Theme: Classic WiX Burn Theme 1.0: Project Description A WiX Burn theme inspired by the classic WiX wizard user interface.QuickConverter: QuickConverter 0.7.3 Beta: Allows access to external variables from inside lambda expressions.DNN® Form and List: DNN Form and List 06.00.07: DotNetNuke Form and List 06.00.06 Changes to 6.0.7•Fixed an error in datatypes.config that caused calculated fields to be missing in 6.0.6 Changes to 6.0.6•Add in Sql to remove 'text on row' setting for UserDefinedTable to make SQL Azure compatible. •Add new azureCompatible element to manifest. •Added a fix for importing templates. Changes to 6.0.2•Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 •Change: Data is now stored in nvarchar(max) instead of ntext C...Trace Reader for Microsoft Dynamics CRM: Trace Reader (1.2013.10.3): Fix a bug when the first caracter of a description line is '[' Add search featureSimpleExcelReportMaker: Serm 0.03: SourceCode and Sample .Net Framework 3.5 AnyCPU compile.Application Architecture Guidelines: App Architecture Guidelines 3.0.8: This document is an overview of software qualities, principles, patterns, practices, tools and libraries.BlackJumboDog: Ver5.9.6: 2013.09.30 Ver5.9.6 (1)SMTP???????、???????????????? (2)WinAPI??????? 
(3)Web???????CGI???????????????????????New ProjectsASP.NET and ASP.NET MVC Blog Test: As a recent convert from Windows Forms & WPF to ASP.NET and ASP.NET MVC, I've created a simple project and implemented it twice using ASP.NET and ASP.NET MVC.cmdradio: Simple command-line interface internet radio playerDa Nang College Of Technology: Cao Ð?ng Công Ngh?DNN Camera Slideshow: The Camera Slideshow module for DNNFolder Iterator: Folder Iterator is a program I made one night to iterate through a folder and output an ordered list of files contained within the chosen directory.foobarpoc: fooJamendo Search Rest: This project uses the Jamendo REST API to search songs by title.Lab Nhibernate, PRISM, WPF, FLUENT, C#, ServiceStack, JSON: This is a lab for new technology and solving real developer problem Merch ASP.NET: This project is a port of Merch built on Zend framework 2 / PHP 5.MyTest2: This is test project2.Paintbear: Paintbear is a free open-source Paint Program and Image Manipulation Program for Windows based on the .NET Framework(version4). Works on Linux,too ! With Wine RasterConversion (.Net framework): Comparison of variants for working with raster (Bitmap class) at a low level. .Net framework, C#SSIS Custom Connection Manager: Example code for creating a Custom Connection Manager in SSIS 2008 and 2012Web Scripting Project: This is a project related to web application development at the Universtity of HertfordshireWebscripting1: New Scripting Website for University of Herts course Write.NET: Another .NET Blogging Engine: Write.NET is currently under development. Write.NET will be lightweight and highly configurable blogging engine built in MVC4Yoer: ????ASP.NET MVC??,????????????

    Read the article

  • When using method chaining, do I reuse the object or create one?

    - by MainMa
    When using method chaining like: var car = new Car().OfBrand(Brand.Ford).OfModel(12345).PaintedIn(Color.Silver).Create(); there may be two approaches: Reuse the same object, like this: public Car PaintedIn(Color color) { this.Color = color; return this; } Create a new object of type Car at every step, like this: public Car PaintedIn(Color color) { var car = new Car(this); // Clone the current object. car.Color = color; // Assign the values to the clone, not the original object. return car; } Is the first one wrong, or is it rather a personal choice of the developer? I believe that the first approach may quickly lead to unintuitive/misleading code. Example: // Create a car with neither color, nor model. var mercedes = new Car().OfBrand(Brand.MercedesBenz).PaintedIn(NeutralColor); // Create several cars based on the neutral car. var yellowCar = mercedes.PaintedIn(Color.Yellow).Create(); var specificModel = mercedes.OfModel(99).Create(); // Would the `specificModel` car be yellow or of neutral color? How would you guess that if // `yellowCar` were in a separate method called somewhere else in code? Any thoughts?

    Read the article

  • Collision detection on a 2D hexagonal grid

    - by SundayMonday
    I'm making a casual grid-based 2D iPhone game using Cocos2D. The grid is a "staggered" hex-like grid consisting of uniformly sized and spaced discs. It looks something like this. I've stored the grid in a 2D array. Also I have a concept of "surrounding" grid cells, namely the six grid cells surrounding a particular cell (except those on the boundaries, which can have fewer than six). Anyway, I'm testing some collision detection and it's not working out as well as I had planned. Here's how I currently do collision detection for a moving disc that's approaching the stationary group of discs: Calculate ij-coordinates of the grid cell closest to the moving cell using the moving cell's xy-position Get the list of surrounding grid cells using the ij-coordinates Examine the surrounding cells. If they're all empty then there is no collision If we have some non-empty surrounding cells then compare the distance between the disc centers to some minimum distance required for a collision If there's a collision then place the moving disc in grid cell ij So this works, but not too well. I've considered a potentially simpler brute force approach where I just compare the moving disc to all stationary discs at each step of the game loop. This is probably feasible in terms of performance since the stationary disc count is 300 max. If not, then some space-partitioning data structure could be used; however, that feels too complex. What are some common approaches and best practices for collision detection in a game like this?
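    For concreteness, here is a compact sketch of the grid-based test described above, written in Python for brevity. The spacing constants, disc radius and the even/odd-row neighbour offsets are assumptions that depend on how the staggered grid is actually laid out:

        import math

        CELL_W = 32.0   # horizontal spacing between disc centres (assumed)
        ROW_H  = 28.0   # vertical spacing between rows (assumed)
        RADIUS = 16.0   # disc radius (assumed)

        def nearest_cell(x, y):
            """Map a world position to the closest (row, col) of the staggered grid."""
            row = int(round(y / ROW_H))
            offset = CELL_W / 2.0 if row % 2 else 0.0
            col = int(round((x - offset) / CELL_W))
            return row, col

        def cell_center(row, col):
            offset = CELL_W / 2.0 if row % 2 else 0.0
            return col * CELL_W + offset, row * ROW_H

        # neighbour offsets (d_row, d_col) for an "odd rows shifted right" layout
        NEIGHBOURS_EVEN = [(0, -1), (0, 1), (-1, -1), (-1, 0), (1, -1), (1, 0)]
        NEIGHBOURS_ODD  = [(0, -1), (0, 1), (-1, 0), (-1, 1), (1, 0), (1, 1)]

        def collides(x, y, grid):
            """grid[row][col] is None for an empty cell, or the stationary disc in it."""
            row, col = nearest_cell(x, y)
            offsets = NEIGHBOURS_ODD if row % 2 else NEIGHBOURS_EVEN
            for dr, dc in [(0, 0)] + offsets:
                r, c = row + dr, col + dc
                if 0 <= r < len(grid) and 0 <= c < len(grid[r]) and grid[r][c] is not None:
                    cx, cy = cell_center(r, c)
                    if math.hypot(x - cx, y - cy) < 2 * RADIUS:
                        return (row, col)   # collision: snap the moving disc into cell ij
            return None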

    Read the article

  • How to improve relationships between consultants and staff programmers

    - by Catchops
    I have been a consultant for a small software consulting firm for quite some time now. Our normal business model is not staff augmentation, but such that we find clients who need assistance in building a solution of some kind and then send in a team who can build that solution, work with the existing IT staff, train all involved on supporting that solution, then move on to the next job. We, of course, are still around for any needed ongoing support. We have a great reputation in our area and have been very successful in implementing the solutions that we provide. However, I have noticed a common theme for most of our projects. When we get on-site, there is generally a "stressed" relationship between our team and many of the IT staff currently at the client. I understand completely that there may be some anxiety about our arrival and that defenses can come up when we are around. Many of the folks are understanding and easy to work with, but there are usually some who will not work well with us at all, and who can quickly become a project risk in many ways. We try to go in with open minds and good attitudes, and try NOT to be arrogant or condescending. We generally get deployed when there is a mess to clean up - but we understand that there were reasons decisions were made that got them into the bind they are in... so we just try to determine the next step forward and move on. My question is this - I'd like to hear from the IT staff and programmers out there who have had consultants in - what are the things that consultants do that fire up negative feelings and attitudes? What can we do to make the relationship better, not only in the beginning, but as the project moves forward?

    Read the article

  • Announcing: Oracle Enterprise Manager 12c Delivers Advanced Self-Service Automation for Oracle Database 12c Multitenant

    - by Scott McNeil
    New Self-Service Driven Provisioning of Pluggable Databases Today Oracle announced new capabilities that support managing the full lifecycle of pluggable database as a service in Oracle Enterprise Manager 12c Release 3 (12.1.0.3). This latest release builds on the existing capabilities to provide advanced automation for deploying database as a service using Oracle Database 12c Multitenant option. It takes it one step further by offering pluggable database as a service through Oracle Enterprise Manager 12c self-service portal providing customers with fast provisioning of database cloud services with minimal time and effort. This is a significant addition to Oracle Enterprise Manager 12c’s existing portfolio of cloud services that includes infrastructure as a service, database as a service, testing as a service, and Java platform as a service. The solution provides a self-service mechanism to provision pluggable databases allowing users to request and access database(s) on-demand. The self-service operations are also enabled through REST APIs allowing customers to integrate with third-party automation systems or their custom enterprise portals. Benefits Self-service provisioning allows rapid access to pluggable database as a service for hosting or certifying applications on Oracle Database 12c Self-service driven migration to pluggable database as a service in order to migrate a pre-Oracle Database 12c database to a pluggable database as a service model and test the consolidation strategy Single service catalog for all approved pluggable database as a service configurations which helps customers achieve standardization while catering to all applications and users in the enterprise Resource guarantee via database resource manager (and IORM on Oracle Exadata) that enables deployment of mixed workloads in a shared environment Quota, role based access, and policy based management that enforces governance and reduces administrative overhead Chargeback or showback which improves metering and accountability for services consumed by each pluggable database Comprehensive REST APIs that support integration with ticketing or change management systems, and or with other self-service portals Minimal administrative and maintenance overhead through self-managing automation that allows for intelligent placement of pluggable databases To understand how pluggable database as a service works, watch this quick demo: Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager Cloud Control12c Mobile app

    Read the article

  • CQRS without using other patterns

    - by John Smith
    I would like to explain CQRS to my team of developers. I just can't figure out how to explain it in the simplest way so they can implement the pattern rapidly without any other frameworks. I've read a lot of resources, including videos and articles, but I can't find how to implement CQRS without using other patterns like a service bus, the event sourcing pattern, or domain-driven design. I know the purpose of these patterns, but for the first step, I don't want them to think CQRS and these patterns must be tied together. My first idea is to say that CQRS is about separating the read part and the write part. The read part is composed only of the UI project and the DAL project. The write part is composed of a typical multilayer architecture: UI/BLL/DAL. Then, does CQRS say we must also have two datastores? What about the notion of commands, which reveal the user's intention - is that also part of CQRS, or of DDD? Basically, how do I implement CQRS without using other patterns? I concede it's also not that clear in my mind, because I'm used to working with NCQRS/DDD/Event Sourcing/ServiceBus in my personal project. Thanks
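    To make the read/write split concrete without any extra patterns, here is a deliberately minimal sketch - no bus, no event sourcing, a single SQLite datastore - written in Python for brevity; all class, method and table names are illustrative assumptions, not part of any framework:

        import sqlite3

        # Write side: a command captures the user's intention, a handler applies the rules.
        class PlaceOrder:
            def __init__(self, customer_id, amount):
                self.customer_id = customer_id
                self.amount = amount

        class OrderCommandHandler:
            def __init__(self, conn):
                self.conn = conn
            def handle(self, cmd):
                if cmd.amount <= 0:
                    raise ValueError("order amount must be positive")
                self.conn.execute("INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
                                  (cmd.customer_id, cmd.amount))
                self.conn.commit()

        # Read side: thin queries straight against the same store, shaped for the UI.
        class OrderQueries:
            def __init__(self, conn):
                self.conn = conn
            def orders_for_customer(self, customer_id):
                rows = self.conn.execute("SELECT id, amount FROM orders WHERE customer_id = ?",
                                         (customer_id,))
                return [{"id": r[0], "amount": r[1]} for r in rows]

        # Usage: one datastore, two models - no bus, no event sourcing.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, amount REAL)")
        OrderCommandHandler(conn).handle(PlaceOrder(customer_id=42, amount=99.5))
        print(OrderQueries(conn).orders_for_customer(42))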

    Read the article

  • Google Updates Picasa Web Albums; Emphasis on Sharing and Showcasing

    - by ETC
    Google has dusted off the Picasa Web interface and updated it with an emphasis on highlighting your photos and the photos of those you’re interested in. The new interface gives you speedy access to all the new photos you’ve uploaded and all the photos your friends, family, and others you’re following are sharing. Mixed in with that are popular photos from talented photographers across the service. It’s a nice change from the previously dull web interface and a definite step towards capturing some of the social power photo sharing site Flickr wields. Hit up the link below to read more. Showcasing Photos From People You Care About [The Official Google Photos Blog]

    Read the article

  • Does waterfall require code complete before QA steps in?

    - by P.Brian.Mackey
    The process used at a certain company consists of: Create a layout according to some designs made in a web page design tool. (CSS, HTML) Requirements come in with "functional requirements". These consist of 100's of lines of business directions. E.g. create a table on page X. Column1 has numeric data. Column1 is the client code. Column2 is a string... etc. Write code to meet all functional requirements. When all code is checked in, send to QA (which is the BA that wrote the requirements) for inspection, bug finds and change requests. Punt back to developer with a list of X bugs and Y change requests. While bug finds or change requests > 0, go to step 4. The agile development environments I have worked in allow, if not demand, early QA inspection and early user acceptance. So, pieces of the program can be refined and redefined before the entire application is in place. Not only that, but the process leaves little room for error or people changing their minds. Instead, those "change requests" come in at the last stage when they do the most damage. And since a bug fix's cost increases over time, this is a costly way to write code. I am no waterfall expert. As described, is this waterfall being mishandled in some way? How does waterfall address my concerns?

    Read the article

  • I want to turn VB.Net Option Strict On

    - by asjohnson
    I recently found out about strong typing in VB.Net (naturally it was on here, thanks!) and have decided I should take another step toward being a better programmer. I went from VBA macros to VB.Net, because I needed a program that I could automate, and I never read anything about strong typing, so I kind of fell into the VB.Net default trap. Now I am looking to turn it on and sort out this whole type thing. I was hoping someone could direct me towards some resources to make this transition as painless as possible. I have read around some and CType seems to come up a lot, but past that I am at a bit of a loss. What are the benefits of switching? Is there more to it than just using CType to cast things? I feel like there is a good article that I have failed to come across, and any direction would be great. Would a good approach be to rewrite a program that is written with Option Strict Off and note the differences?

    Read the article

  • The Raspberry Pi Now Has Its Own App Store

    - by Jason Fitzpatrick
    Raspberry Pi, the credit-card sized computer with an ARM processor, now has its own appstore where Raspberry Pi hobbyists and developers can share their creations in a one-stop location accessible to all Raspberry Pi users. In today’s press release about the store, the Raspberry Pi Foundation writes: We’ve been amazed by the variety of software that people have written for, or ported to, the Raspberry Pi. Today, together with our friends at IndieCity and Velocix, we’re launching the Pi Store to make it easier for developers of all ages to share their games, applications, tools and tutorials with the rest of the community. The Pi Store will, we hope, become a one-stop shop for all your Raspberry Pi needs; it’s also an easier way into the Raspberry Pi experience for total beginners, who will find everything they need to get going in one place, for free. The store runs as an X application under Raspbian, and allows users to download content, and to upload their own content for moderation and release. At launch, we have 23 free titles in the store, ranging from utilities like LibreOffice and Asterisk to classic games like Freeciv and OpenTTD and Raspberry Pi exclusive Iridium Rising. We also have one piece of commercial content: the excellent Storm in a Teacup from Cobra Mobile. For more information about the store, including how to install the app store on your Pi, check out the full press release here. To get started browsing the store, hit up the link below.

    Read the article

  • SQL Saturday #220 - Atlanta - Pre-Conference Scholarships!

    - by Most Valuable Yak (Rob Volk)
    We Want YOU…To Learn! AtlantaMDF and Idera are teaming up to find a few good people. If you are: A student looking to work in the database or business intelligence fields A database professional who is between jobs or wants a better one A developer looking to step up to something new On a limited budget and can’t afford professional SQL Server training Able to attend training from 9 to 5 on May 17, 2013 AtlantaMDF is presenting 5 Pre-Conference Sessions (pre-cons) for SQL Saturday #220! And thanks to Idera’s sponsorship, we can offer one free ticket to each of these sessions to eligible candidates! That means one scholarship per Pre-Con! One Recipient Each will Attend: Denny Cherry: SQL Server Security http://sqlsecurity.eventbrite.com/ Adam Machanic: Surfing the Multicore Wave: Processors, Parallelism, and Performance http://surfmulticore.eventbrite.com/ Stacia Misner: Languages of BI http://languagesofbi.eventbrite.com/ Bill Pearson: Practical Self-Service BI with PowerPivot for Excel http://selfservicebi.eventbrite.com/ Eddie Wuerch: The DBA Skills Upgrade Toolkit http://dbatoolkit.eventbrite.com/ If you are interested in attending these pre-cons send an email by April 30, 2013 to [email protected] and tell us: Why you are a good candidate to receive this scholarship Which sessions you’d like to attend, and why (list multiple sessions in order of preference) What the session will teach you and how it will help you achieve your goals The emails will be evaluated by the good folks at Midlands PASS in Columbia, SC. The recipients will be notified by email and announcements made on May 6, 2013. GOOD LUCK! P.S. - Don't forget that SQLSaturday #220 offers free* training in addition to the pre-cons! You can find more information about SQL Saturday #220 at http://www.sqlsaturday.com/220/eventhome.aspx. View the scheduled sessions at http://www.sqlsaturday.com/220/schedule.aspx and register for them at http://www.sqlsaturday.com/220/register.aspx. * Registration charges a $10 fee to cover lunch expenses.

    Read the article

  • Tale of an Encrypted SSIS Package in msdb and a Lost Password

    - by Argenis
      Yesterday a Developer at work asked for a copy of an SSIS package in Production so he could work on it (please, dear Reader – withhold judgment on Source Control – I know!). I logged on to the SSIS instance, and when I went to export the package… Oops. I didn’t have that password. The DBA who uploaded the package to Production is long gone; my fellow DBA had no idea either - and the Devs returned a cricket sound when queried. So I posed the obligatory question on #SQLHelp and a bunch of folks jumped in – some to help and some to make fun of me (thanks, @SQLSoldier @crummel4 @maryarcia and @sqljoe). I tried their suggestions to no avail…even ran some queries to see if I could figure out how to extract the package XML from the system tables in msdb:   SELECT CAST(CAST(p.packagedata AS varbinary(max)) AS varchar(max)) FROM msdb.dbo.sysssispackages p WHERE p.name = 'LePackage'   This just returned a bunch of XML with encrypted data on it:  I knew there was a job in SQL Agent scheduled to execute the package, and when I tried to look at details on the job step I got the following: Not very helpful. The password had to be saved somewhere, but where?? All of a sudden I remembered that there was a system table I hadn’t queried yet: SELECT sjs.command FROM msdb.dbo.sysjobs sj JOIN msdb.dbo.sysjobsteps sjs ON sj.job_id = sjs.job_id WHERE sj.name = 'Run LePackage' The result: “Well, that’s really secure”, I thought to myself. Cheers, -Argenis

    Read the article

  • FoxTales: Behind the Scenes at Fox Software by Kerry Nietz

    Flashbacks from the past! It's truly amazing to discover that software development from freshman to senior level, as well as project management, hasn't changed that much. Kerry Nietz's memoir runs from his final year at college to his first job at Fox Software to 'an early retirement' at Microsoft. This title also brought his other, fictional novels to my attention. Once again, here is the review I published on Amazon: Built to last! I have been around in software development for more than a decade now, but honestly I have to admit it is only now that I took the opportunity to read about the history of what used to be my primary programming language. In fact, I started with Visual FoxPro 6 back in 1999 and went down only as far as FoxPro for Windows 2.6 during migration projects - long after the stories described in this title. It is really interesting to see how they actually managed to create a great product with such a small team of developers. "Create the best Report Writer in the world, out of only sawdust, bubblegum, and dreams." - That's the best sentence I'm going to quote from this title in the future. An inspiration to achieve the impossible, only by taking small steps. Just begin the journey - one step after the next one. If you fall, stand up and continue to walk. Kerry takes the reader on an amazing trip through almost 4 years working at a small software company in Perrysburg, Ohio, which went from yet another 'look-alike' of the mighty Ashton-Tate dBase to the leading force in database development, long before Microsoft Access (project name: Cirrus) was even finished. It survived Borland's Paradox, and even nowadays Visual FoxPro is still in daily use in thousands of companies worldwide. Actually, I'm glad that I had the chance to foster my programming knowledge with Visual FoxPro. After his excellent work in software development, Kerry went for a second career as a writer. I'm looking forward to reading his other titles soon:

    Read the article

  • FMw Diagnostic Framework : Automatic Capture of Diagnostic Data Upon First Failure!

    - by Daniel Mortimer
    Introduction There is nothing more frustrating than a problem that "cannot be reproduced". Logs and configuration files have been analysed, but there just isn't enough information to establish the root cause. The issue may be closed, but you are left with the feeling that the problem will raise its ugly head again in the future. Trouble is, to resolve such issues you need to capture diagnostic data at the exact time the incident occurs. Step forward Fusion Middleware Diagnostic Framework!  Diagnostic Framework monitors WebLogic Managed Servers and delivers "automatic capture of diagnostic data upon first failure". To quote from the Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 "Diagnosing Problems": "When a critical error occurs ... the Diagnostic Framework automatically collects diagnostics, such as thread dumps, DMS metric dumps, and WebLogic Diagnostics Framework (WLDF) server image dumps ... The data is stored in a file-based repository and is accessible with command-line utilities." In other words, the data collected upon first failure - especially the thread and image dumps - provides a snapshot of the system as, or immediately after, the problem occurs. The table in the original post lists the types of WebLogic Server issues which fall into the scope of Diagnostic Framework. How to Configure Diagnostic Framework? Depending on your Fusion Middleware product choice you may not need to do anything! Diagnostic Framework is automatically installed, configured and initiated for any WebLogic Domain which has the Oracle Java Required Files (JRF) template applied. This template is applied by default whenever you configure WebLogic Managed Servers for products such as Portal / Forms / Reports / Discoverer, Identity Management (OID, OAM, OIM etc), WebCenter, SOA. Check your WebLogic Domain directory structure. If you have an "adr" sub directory under DOMAIN_HOME/servers/<servername>/ then the JRF template has been applied and Diagnostic Framework will be in play. Should the "adr" sub directory not exist, review the advice given in My Oracle Support article How to Apply FMW (EM) Control and JRF to a WebLogic Domain and Managed Servers [ID 947043.1]. If you are working with a standalone WebLogic Server solution and applying Oracle JRF is not acceptable, consider using WLDF - WebLogic Diagnostic Framework. (Fusion Middleware Diagnostic Framework makes use of WLDF under the covers.) A couple of useful links about WLDF are listed below: Configuring and Using the Diagnostics Framework for Oracle WebLogic Server 11g; WebLogic Diagnostics Framework - A Very Useful Tool [a nice blog which describes a WLDF use case] How to Get Started With Diagnostic Framework To be frank, the Fusion Middleware Administrator's Guide is the best place to start your learning: Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 "Diagnosing Problems". A lot of reading here, but if you are in a hurry and just want to get the right information to Oracle Support to help resolve your issue, check out the next section below. How to Upload Diagnostic Framework Incident Data to Oracle Support Some Background Information There are three interfaces to the Repository: Enterprise Manager Cloud Control (Support Workbench) WLST (Command Line) ADRCI (Command Line) The Enterprise Manager Cloud Control does provide a nice GUI interface to search, view and package diagnostic framework incidents. However, this software is not to be confused with Fusion Middleware (EM) Control. 
Cloud Control (formerly known as Grid Control) is part of the Enterprise Manager media package. EM Cloud Control has it's own install and configuration story. Therefore, for the benefit of those yet to install and play with Cloud Control, I am going to describe how to use the command line tools. Ideally, you would only need to one command line interface, but currently I suggest using both - mainly due to the fact that ADRCI SHOW INCIDENTS does not reveal the description behind the Diagnostic Framework error code. Instructions: Note: WLST and ADRCI are case sensitive when it comes to handling parameter values. If you make a mistake, expect an unfriendly syntax error message. 1) Find the incident Note: The managed server which you are troubleshooting must be up and running. If the managed server is down, ensure the domain's Admin Server is accessible. If you cannot connect to the Admin Server or the Managed Server the example WLST commands will not work. a) Launch WLST  Note: Use the WLST which resides in the "oracle_common" directory (not WL_HOME/common/bin) otherwise you will get a syntax error like the one below Traceback (innermost last):  File "<console>", line 1, in ?NameError: listIncidents MW_HOME/oracle_common/common/bin/wlst.sh b) Connect to the managed server or the admin server e.g. wls:/offline> connect('weblogic','welcome1','t3://localhost:7020') c) Run the command wls:/MyDomain/serverConfig> listIncidents() This will list the incidents for the server to which you have connected. If you have connected to the Admin Server and want to list the incidents for a managed server within the domain, use the command wls:/MyDomain/serverConfig> listIncidents(adrHome='diag\ofm\MyDomain\MyManagedServer' ,server='MyManagedServer') Example output Incident Id     Problem Key              Incident Time         1       DFW-99998 [java.lang.NullPointerException] [oracle.error.simulator.ErrorSimulator.createNullPointerException][errorWebApp_1-0-0-0]        Fri Nov 02 10:38:46 GMT 2012  The piece highlighted in bold is the description you do not see when using the ADRCI 'SHOW INCIDENT' command. Make a note of the incident id. You are ready to move to step 2 2. Package the incident a) Set up the environment - example commands below are for Unix cd <DOMAIN_HOME>/bin . ./setDomainEnv.sh If you want ADRCI to run a Remote Diagnostic Agent collection (recommended) at generate package time, point ORACLE_HOME at oracle_common ORACLE_HOME=$MW_HOME/oracle_common; export ORACLE_HOME To prevent ADRCI from running RDA at generate package time, point ORACLE_HOME at WL_HOME/server/adr directory.  ORACLE_HOME=$WL_HOME/server/adr; export ORACLE_HOME b) Launch adrci $WL_HOME/server/adr/adrci c) Set BASE and HOMEPATH adrci> SET BASE /oracle/middleware/user_projects/domains/ mydomain/servers/mymanagedserver/adr adrci> SET HOMEPATH diag/ofm/mydomain/mymanagedserver d)  Optionally run SHOW INCIDENTS e.g. 
adrci> SHOW INCIDENTS -MODE DETAIL ADR Home = /oracle/middleware/user_projects/domains/mydomain/ servers/mymanagedserver/adr/diag/ofm/mydomain/mymanagedserver:***********************************************************************************************************************************INCIDENT INFO RECORD 1**********************************************************   INCIDENT_ID                   1   STATUS                        ready   CREATE_TIME                   2012-11-02 10:38:46.468000 +00:00   PROBLEM_ID                    1   CLOSE_TIME                    <NULL>   FLOOD_CONTROLLED              none   ERROR_FACILITY                DFW   ERROR_NUMBER                  99998   ERROR_ARG1                    <NULL>   ERROR_ARG2                    <NULL>   ERROR_ARG3                    <NULL>   ERROR_ARG4                    <NULL>   ERROR_ARG5                    <NULL>   ERROR_ARG6                    <NULL>   ERROR_ARG7                    <NULL>   ERROR_ARG8                    <NULL>   ERROR_ARG9                    <NULL>   ERROR_ARG10                   <NULL>   ERROR_ARG11                   <NULL>   ERROR_ARG12                   <NULL>   SIGNALLING_COMPONENT          <NULL>   SIGNALLING_SUBCOMPONENT       <NULL>   SUSPECT_COMPONENT             <NULL>   SUSPECT_SUBCOMPONENT          <NULL>   ECID                          5162744c6a2eea5e:155ff445:13ac0aae7cb:-8000-0000000000000325   IMPACTS                       01 rows fetched e)  Create a logical package IPS CREATE PACKAGE INCIDENT incident_number e.g. adrci> IPS CREATE PACKAGE INCIDENT 1Created package 1 based on incident id 1, correlation level typical f) Generate the package IPS GENERATE PACKAGE package_number IN path e.g. adrci> IPS GENERATE PACKAGE 1 IN /tmp Generated package 1 in file /tmp/DFW99998j_20121102113633_COM_1.zip, mode complete Note: If the generate package command hangs, ADRCI may be experiencing an issue when running RDA. To avoid such trouble, exit ADRCI and point the ORACLE_HOME environment variable at WL_HOME/server/adr 3) Upload the package zip to Oracle Support via your Service Request a) Log into My Oracle Support and locate your Service Request b) Click on "Add Attachments c) And upload the zip file

    Read the article

  • Start Debugging in Visual Studio

    - by Daniel Moth
    Every developer is familiar with hitting F5 and debugging their application, which starts their app with the Visual Studio debugger attached from the start (instead of attaching later). This is one way to achieve step 1 of the Live Debugging process. Hitting F5, F11, Ctrl+F10 and the other ways to start the process under the debugger are covered in this MSDN "How To". The way you configure the debugging experience, before you hit F5, is by selecting the "Project" and then the "Properties" menu (Alt+F7 on my keyboard bindings). Depending on your project type there are different options, but if you browse to the Debug (or Debugging) node in the properties page you'll have a way to select local or remote machine debugging, what debug engines to use, command line arguments to use during debugging etc. Currently the .NET and C++ project systems are different, but one would hope that one day they would be unified to use the same mechanism and UI (I don't work on that product team so I have no knowledge of whether that is a goal or if it will ever happen). Personally I like the C++ one better; here is what it looks like (and it is described on this MSDN page): If you were following along in the "Attach to Process" blog post, the equivalent to the "Select Code Type" dialog is the "Debugger Type" dropdown: that is how you change the debug engine. Some of the debugger properties options appear on the standard toolbar in VS. With Visual Studio 11, the Debug Type option has been added to the toolbar. If you don't see that in your installation, customize the toolbar to show it - VS 11 tends to be conservative in what you see by default, especially for the non-C++ Visual Studio profiles. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Copy Formatting in Word

    - by Ahamad Patan
    Many a time you may need to copy formatting in Word. The Format Painter feature lets you quickly and easily "copy" all the formatting characteristics from one group of selected text to another. This is helpful when you have several headings that you want formatted consistently. Here are the steps for copying formatting: 1. Select, or highlight, the text containing the format you wish to copy. 2. Office 2003 - Click the Format Painter button on the Standard toolbar (it looks like a paintbrush). Office 2007 - the Format Painter button is located on the Home tab (it looks like a paintbrush). In Office 2003, an I-beam with a small cross to the left will appear as you move your mouse; in Office 2007, an I-beam with a small paintbrush will appear. 3. Select the text you wish to copy the formatting to. 4. The formatting of the selected text will change automatically. For multiple formatting changes, double-click the Format Painter button in step 2. Remember, you'll have to click it again to deselect it, or press Esc.

    Read the article

  • Stereo images rectification and disparity: which algorithms?

    - by alessandro.francesconi
    I'm trying to figure out what are currently the two most efficent algorithms that permit, starting from a L/R pair of stereo images created using a traditional camera (so affected by some epipolar lines misalignment), to produce a pair of adjusted images plus their depth information by looking at their disparity. Actually I've found lots of papers about these two methods, like: "Computing Rectifying Homographies for Stereo Vision" (Zhang - seems one of the best for rectification only) "Three-step image recti?cation" (Monasse) "Rectification and Disparity" (slideshow by Navab) "A fast area-based stereo matching algorithm" (Di Stefano - seems a bit inaccurate) "Computing Visual Correspondence with Occlusions via Graph Cuts" (Kolmogorov - this one produces a very good disparity map, with also occlusion informations, but is it efficient?) "Dense Disparity Map Estimation Respecting Image Discontinuities" (Alvarez - toooo long for a first review) Anyone could please give me some advices for orienting into this wide topic? What kind of algorithm/method should I treat first, considering that I'll work on a very simple input: a pair of left and right images and nothing else, no more information (some papers are based on additional, pre-taken, calibration infos)? Speaking about working implementations, the only interesting results I've seen so far belongs to this piece of software, but only for automatic rectification, not disparity: http://stereo.jpn.org/eng/stphmkr/index.html I tried the "auto-adjustment" feature and seems really effective. Too bad there is no source code...

    Read the article
