Search Results

Search found 15099 results on 604 pages for 'runtime environment'.


  • What should developers know about Windows executable binary file compression?

    - by Peter Turner
    I'd never heard of this before, so shame on me, but programs like UPX can compress my files by 80%, which is totally sweet, but I have no idea what the disadvantages of doing this are, or even what the compressor does. The website linked above doesn't say anything about dynamically linked DLLs, but it does mention compressing DESCENT 2 and Netscape 4.06. It also doesn't say what the tradeoffs are, only the benefits. If there weren't tradeoffs, why wouldn't my linker compress the file? If I have an environment with one executable and 20-30 DLLs, some of which are dynamically loaded and unloaded fairly arbitrarily, but not in loops (hopefully), do I take a big hit in processing time decompressing these DLLs when they're used?

    Read the article

  • Rails Easy Data Dumping

    - by Madhan ayyasamy
    Hi Friends, the following snippets show the easiest way to dump data in a Ruby on Rails environment. You'll often need to get data from production to dev, from dev to your local machine, or from your local machine to another developer's. One plug-in we use over and over is yaml_db. This nifty little plug-in enables you to dump or load data by issuing a Rake command. The data is persisted in a YAML file located in db/data.yml, which is very portable and easy to read if you need to examine the data.

        rake db:data:dump

    Example data found in db/data.yml:

        ---
        campaigns:
          columns:
          - id
          - client_id
          - name
          - created_at
          - updated_at
          - token
          records:
          - - "1"
            - "1"
            - First push
            - 2008-11-03 18:23:53
            - 2008-11-03 18:23:53
            - 3f2523f6a665
          - - "2"
            - "2"
            - First push
            - 2008-11-03 18:26:57
            - 2008-11-03 18:26:57
            - 9ee8bc427d94
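    For the reverse direction, yaml_db also provides a matching load task (noted here for completeness; run it against the environment you want to populate):

        rake db:data:load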

    Read the article

  • Resources for 2D rendering using OpenGL?

    - by nightcracker
    I noticed that there is quite some difference between 3D and 2D rendering using OpenGL; the techniques are different - pixel-perfect placement is a lot more desirable, among other things. Are there any good (complete) references on using OpenGL for rendering 2D graphics? There are quite a few "tutorials" around on the net that help you open a window, set up a half-decent environment, and draw a sprite, but no really good information on rotation, blending, lighting, drawing order, using the z-buffer, particles, "complex" primitives (circles, stars, cross symbols), ensuring pixel-perfect rendering, instancing, and many other staple 2D effects/techniques. Any books, great blogs, anything? Any particularly awesome libraries to read?

    Read the article

  • June Edition - Oracle Database Insider

    - by jgelhaus
    Now available. The June edition of the Oracle Database Insider includes:

    NEWS

    June 10: Oracle CEO Larry Ellison Live on the Future of Database Performance - At a live webcast on June 10 at Oracle's headquarters, Oracle CEO Larry Ellison is expected to announce the upcoming availability of Oracle Database In-Memory, which dramatically accelerates business decision-making by processing analytical queries in memory without requiring any changes to existing applications. Read More

    New Study Confirms Capital Expenditure Savings with Oracle Multitenant - A new study finds that Oracle Multitenant, an option of Oracle Database 12c, drives significant savings in capital expenditures by enabling the consolidation of a large number of databases on the same number of, or fewer, hardware resources. Read More

    VIDEO

    Oracle Database 12c: Multitenant Environment with Tom Kyte - Tom Kyte discusses Oracle Multitenant, followed by a demo of the multitenant architecture that includes moving a pluggable database (PDB) from one multitenant container database to another, cloning a PDB, and creating a new PDB.

    ...and much more.

    Read the article

  • Best practices for periodically saving game state to disk

    - by Ben Morris
    I'm working on an MMO. All of the player and environment data lives on a server and is kept in memory. There's a "world" object which keeps track of all of the maps, characters, etc. and their relations to each other. To avoid data loss in case of a crash, I've been periodically serializing the world to disk. The trouble is, this object can be quite large, so when the server starts writing, there's noticeable in-game slowdown for a few seconds, which I'd like to avoid. Any pointers on how to go about this in a more efficient way?
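    One pattern that avoids the pause (offered here as a hedged sketch, not from the post) is to take a cheap snapshot of the world under a short lock and do the slow serialization and disk write on a background task, so the game loop only pays for the in-memory copy. In this C# sketch, World, CloneForSnapshot(), and WorldSerializer are hypothetical stand-ins for your own types:

        using System;
        using System.IO;
        using System.Threading.Tasks;

        // Hypothetical stand-ins for the post's world object and serializer.
        public class World
        {
            public World CloneForSnapshot() { return (World)this.MemberwiseClone(); }
        }

        public static class WorldSerializer
        {
            public static void Write(Stream stream, World world)
            {
                // Placeholder: plug in your own binary/JSON serialization here.
                using (var writer = new StreamWriter(stream))
                    writer.Write(world.ToString());
            }
        }

        public class WorldSaver
        {
            private readonly object worldLock = new object();
            private Task pendingSave = Task.CompletedTask;

            // Called periodically (e.g. every few minutes) by the server.
            public void SaveWorld(World world, string path)
            {
                if (!pendingSave.IsCompleted)
                    return; // previous save still running; skip this cycle

                World snapshot;
                lock (worldLock)
                {
                    // Only this copy blocks the simulation; keep it cheap.
                    snapshot = world.CloneForSnapshot();
                }

                // Serialize and write on a worker thread.
                pendingSave = Task.Run(() =>
                {
                    string tmp = path + ".tmp";
                    using (var stream = File.Create(tmp))
                        WorldSerializer.Write(stream, snapshot);

                    // Swap the finished file in so a crash never leaves a half-written save.
                    if (File.Exists(path)) File.Replace(tmp, path, null);
                    else File.Move(tmp, path);
                });
            }
        }

    Variations on the same idea: serialize each map or region separately so no single write is huge, or write deltas between snapshots instead of the whole world every time.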

    Read the article

  • How do you develop web applications? [closed]

    - by ck3g
    How do you and/or your team develop your web applications? Language, framework, or platform doesn't matter. I would like to know about the structure of your environment. For example:

    * Using an IDE on your workstation with project files on a remote host, accessed via SFTP; files are saved instantly on the remote host
    * All files are local and are uploaded to the remote host on save
    * Files are local, a web server runs on the local machine, and testing happens on localhost
    * etc.

    You could also write about the benefits of your approach; that would be useful for me. Thanks.

    Update: there must be a question here, so here it is: which approach is best, in your opinion?

    Read the article

  • Real World Nuget

    - by JoshReuben
    Why Nuget

    A higher level of granularity for managing references: when you have solutions of many projects that depend on solutions of many projects, etc. → escape from Solution Hell.

    Links

    · Using a GUI (Package Explorer) to build packages - http://docs.nuget.org/docs/creating-packages/using-a-gui-to-build-packages
    · Creating a Nuspec file - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic2.aspx
    · Consuming a Nuget package - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic3
    · Nuspec reference - http://docs.nuget.org/docs/reference/nuspec-reference
    · Updating packages - http://nuget.codeplex.com/wikipage?title=Updating%20All%20Packages
    · Versioning - http://docs.nuget.org/docs/reference/versioning

    POC Folder Structure

    POC Setup Steps

    · Install Package Explorer
    · Source
      o Create a source solution – configure the output directory for the projects (Project > Properties > Build > Output Path)
    · Package
      o Add assemblies to the package from the output directory (drag and drop) - add a net folder
      o File > Export – saves the .nuspec file and lib contents:

        <?xml version="1.0" encoding="utf-16"?>
        <package>
          <metadata>
            <id>MyPackage</id>
            <version>1.0.0.3</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <summary />
          </metadata>
        </package>

      o File > Save – saves the .nupkg file
    · Create Target Solution
      o In Tools > Options: configure the package source
      o Add the package and select the projects

    Output from the package manager (PowerShell console):

        ------- Installing...MyPackage 1.0.0 -------
        Added file 'NugetSource.AssemblyA.dll' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyA.pdb' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyB.dll' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyB.pdb' to folder 'MyPackage.1.0.0\lib'.
        Added file 'MyPackage.1.0.0.nupkg' to folder 'MyPackage.1.0.0'.
        Successfully installed 'MyPackage 1.0.0'.
        Added reference 'NugetSource.AssemblyA' to project 'AssemblyX'
        Added reference 'NugetSource.AssemblyB' to project 'AssemblyX'
        Added file 'packages.config'.
        Added file 'packages.config' to project 'AssemblyX'
        Added file 'repositories.config'.
        Successfully added 'MyPackage 1.0.0' to AssemblyX.
    ==============================

      o A packages folder is created at the solution level
      o A packages.config file is generated in each project:

        <?xml version="1.0" encoding="utf-8"?>
        <packages>
          <package id="MyPackage" version="1.0.0" targetFramework="net40" />
        </packages>

    A local packages folder is created for the package versions installed. Each folder contains the downloaded .nupkg file and its unpacked contents – e.g. the DLLs that the project references. Note: this folder is not checked in.

    Update Packages

      o Configure the Package Manager to automatically check for updates
      o Browse packages - it automatically picks up the updates

    Update Procedure

    · Modify the source
    · Change the source version in the assembly info
    · Build the source
    · Open the last package in Package Explorer
    · Increment the package version number and re-add the assemblies
    · Save the package with the new version number and export its definition
    · In the target solution – Tools > Manage Nuget Packages – click on All to trigger a refresh, then click on recent packages to see the updates
    · If problematic, delete the packages folder

    Versioning

        uninstall-package mypackage
        install-package mypackage -version 1.0.0.3

        uninstall-package mypackage
        install-package mypackage -version 1.0.0.4

    Dependencies

        <?xml version="1.0" encoding="utf-16"?>
        <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
          <metadata>
            <id>MyDependentPackage</id>
            <version>1.0.0</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <dependencies>
              <group targetFramework=".NETFramework4.0">
                <dependency id="MyPackage" version="1.0.0.4" />
              </group>
            </dependencies>
          </metadata>
        </package>

    Using NuGet without committing packages to source control

    http://docs.nuget.org/docs/workflows/using-nuget-without-committing-packages

    Right-click the Solution node in Solution Explorer and select Enable NuGet Package Restore — recall that the packages folder is not part of the solution. If you get "downloading package 'Nuget.build' failed", configure the proxy to support the certificate for https://nuget.org/api/v2/ and allow unrestricted access to packages.nuget.org.

    To test connectivity: get-package -listavailable

    To test NuGet Package Restore: delete the packages folder and open Visual Studio as admin. In the NuGet MSBuild targets:

        <Import Project="$(SolutionDir)\.nuget\nuget.targets" />

    TFS Build Integration

    Modify the Nuget.targets file:

        <RestorePackages Condition="  '$(RestorePackages)' == '' ">
          True
        </RestorePackages>
        …
        <PackageSource Include="\\IL-CV-004-W7D\Packages" />

    Add the system environment variable EnableNuGetPackageRestore=true and restart the "Visual Studio Team Foundation Build Service Host" service.
    Important: ensure Network Service has access to the Packages folder.

    Nugetter TFS Build Integration

    Add the Nugetter build process templates to TFS source control. For the Build Controller, specify the location of the custom assemblies.

    Generate the .nuspec file from Package Explorer (File > Export) and edit the file elements – remove the path info from the src and target attributes:

        <?xml version="1.0" encoding="utf-16"?>
        <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
          <metadata>
            <id>Common</id>
            <version>1.0.0</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <dependencies>
              <group targetFramework=".NETFramework3.5" />
            </dependencies>
          </metadata>
          <files>
            <file src="CommonTypes.dll" target="CommonTypes.dll" />
            <file src="CommonTypes.pdb" target="CommonTypes.pdb" />
            …

    Add the .nuspec file to the solution so that it is available for the build: Dev\NovaNuget\Common\NuSpec\common.1.0.0.nuspec

    Add a Build Process Definition based on the Nugetter build process template and configure the build process – specify:

    · the .sln to build
    · the base path (output directory)
    · the Nuget.exe file path
    · the .nuspec file path

    Copy DLLs to a binary folder

    1) Set copy local to false for an assembly reference.

    2) MSBuild Copy Task – modify the .csproj file: http://msdn.microsoft.com/en-us/library/3e54c37h.aspx

        <ItemGroup>
          <MySourceFiles Include="$(MSBuildProjectDirectory)\..\SourceAssemblies\**\*.*" />
        </ItemGroup>
        <Target Name="BeforeBuild">
          <Copy SourceFiles="@(MySourceFiles)" DestinationFolder="bin\debug\SourceAssemblies" />
        </Target>

    3) Set the probing assembly search path from app.config - http://msdn.microsoft.com/en-us/library/823z9h8w(v=vs.80).aspx

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <probing privatePath="SourceAssemblies"/>
            </assemblyBinding>
          </runtime>
        </configuration>

    Forcing 'copy local = false'

    The following generic PowerShell script was added to the package's install.ps1:

        param($installPath, $toolsPath, $package, $project)

        if ($project.Object.Project.Name -ne "CopyPackages")
        {
            $asms = $package.AssemblyReferences | %{ $_.Name }
            foreach ($reference in $project.Object.References)
            {
                if ($asms -contains $reference.Name + ".dll")
                {
                    $reference.CopyLocal = $false
                }
            }
        }

    An empty project named "CopyPackages" was added to the solution - it references all the packages and is the only one set to CopyLocal="true". No MSBuild knowledge required.

    Read the article

  • Pre-built Oracle VirtualBox Images

    - by james.bayer
    I’m thrilled to see that Justin Kestelyn has a post that pre-built Oracle VirtualBox images are now available on OTN. There are VMs for various Oracle software stacks, including one for Database, one for Java with Glassfish, and one for SOA and BPM products that includes WebLogic Server. This is just one example of the synergy of a combined Oracle and Sun delivering improvements for customers. These VMs make it even more straightforward to get started with Oracle software in a development environment without having to worry about initial software installation and configuration. I've been a bit quiet lately on the blogging front, but I'm currently working on another area leveraging the best of Oracle and Sun. Oracle is uniquely positioned to deliver engineered systems that optimize the entire stack of software and hardware. You've probably seen the announcements about Exalogic, and I'm excited about the potential to deliver major advancements for middleware. More to come…

    Read the article

  • Wise settings for Git

    - by Marko Apfel
    These settings reflect my Git environment. They are the result of reading about and trying out several ideas from others.

    Must-Haves

    Aliases

        [alias]
            ci = commit
            st = status
            co = checkout
            oneline = log --pretty=oneline
            br = branch
            la = log --pretty=\"format:%ad %h (%an): %s\" --date=short
            df = diff
            dc = diff --cached
            lg = log -p
            lol = log --graph --decorate --pretty=oneline --abbrev-commit
            lola = log --graph --decorate --pretty=oneline --abbrev-commit --all
            ls = ls-files
            ign = ls-files -o -i --exclude-standard

    Colors

        [color]
            ui = auto
        [color "branch"]
            current = yellow reverse
            local = yellow
            remote = green
        [color "diff"]
            meta = yellow bold
            frag = magenta bold
            old = red bold
            new = green bold
            whitespace = red reverse
        [color "status"]
            added = green
            changed = red
            untracked = cyan

    Core

        [core]
            autocrlf = true
            excludesfile = c:/Users/<user>/.gitignore
            editor = 'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin

    Nice to have

    Merge and Diff

        [merge]
            tool = kdiff3
        [mergetool "kdiff3"]
            path = c:/Program Files (x86)/KDiff3/kdiff3.exe
        [mergetool "p4merge"]
            path = c:/Program Files (x86)/Perforce Merge/p4merge.exe
            cmd = p4merge \"$BASE\" \"$LOCAL\" \"$REMOTE\" \"$MERGED\"
            keepTemporaries = false
            trustExitCode = false
            keepBackup = false
        [diff]
            guitool = kdiff3
        [difftool "kdiff3"]
            path = c:/Program Files (x86)/KDiff3/kdiff3.exe
        [difftool "p4merge"]
            path = C:/Users/<user>/My Applications/Perforce Merge/p4merge.exe
            cmd = \"p4merge.exe $LOCAL $REMOTE\"

    Read the article

  • How to get Visual Studio 2012 working on Ubuntu 12.04.1?

    - by kamil
    I am trying to get Visual Studio working in Unity (via Wine) without using any virtual machine or other desktop environment alternatives. I am convinced Visual Studio is the ultimate IDE for .NET programming languages. I'm not really interested in dual booting. I have been working for more than 10 years with Visual Studio and I prefer it over other IDEs. I have tried other IDEs, but they didn't work too well for me. Does anyone know a way to get this working natively?

    Read the article

  • Use a Fake Http Channel to Unit Test with HttpClient

    - by Steve Michelotti
    Applications get data from lots of different sources. The most common is to get data from a database or a web service. Typically, we encapsulate calls to a database in a Repository object, and we create some sort of IRepository interface as an abstraction to decouple layers and enable easier unit testing by leveraging faking and mocking. This works great for database interaction. However, when consuming a RESTful web service, this is not always the best approach. The WCF Web APIs that are available on CodePlex (current drop is Preview 3) provide a variety of features to make building HTTP REST services more robust. When you download the latest bits, you'll also find a new HttpClient which has been updated for .NET 4.0, as compared to the one that shipped for 3.5 in the original REST Starter Kit. The HttpClient currently provides the best API for consuming REST services on the .NET platform, and the WCF Web APIs provide a number of extension methods which extend HttpClient and make it even easier to use. Let's say you have a client application that is consuming an HTTP service – this could be Silverlight, WPF, or any UI technology, but for my example I'll use an MVC application:

         1: using System;
         2: using System.Net.Http;
         3: using System.Web.Mvc;
         4: using FakeChannelExample.Models;
         5: using Microsoft.Runtime.Serialization;
         6:
         7: namespace FakeChannelExample.Controllers
         8: {
         9:     public class HomeController : Controller
        10:     {
        11:         private readonly HttpClient httpClient;
        12:
        13:         public HomeController(HttpClient httpClient)
        14:         {
        15:             this.httpClient = httpClient;
        16:         }
        17:
        18:         public ActionResult Index()
        19:         {
        20:             var response = httpClient.Get("Person(1)");
        21:             var person = response.Content.ReadAsDataContract<Person>();
        22:
        23:             this.ViewBag.Message = person.FirstName + " " + person.LastName;
        24:
        25:             return View();
        26:         }
        27:     }
        28: }

    On line #20 of the code above you can see I'm performing an HTTP GET request to a Person resource exposed by an HTTP service. On line #21, I use the ReadAsDataContract() extension method provided by the WCF Web APIs to deserialize the response into a Person object. In this example, the HttpClient is being passed into the constructor by MVC's dependency resolver – in this case, I'm using StructureMap as an IoC container, and my StructureMap initialization code looks like this:

         1: using StructureMap;
         2: using System.Net.Http;
         3:
         4: namespace FakeChannelExample
         5: {
         6:     public static class IoC
         7:     {
         8:         public static IContainer Initialize()
         9:         {
        10:             ObjectFactory.Initialize(x =>
        11:             {
        12:                 x.For<HttpClient>().Use(() => new HttpClient("http://localhost:31614/"));
        13:             });
        14:             return ObjectFactory.Container;
        15:         }
        16:     }
        17: }

    My controller code currently depends on a concrete instance of the HttpClient. Now I *could* create some sort of interface, wrap the HttpClient in it, and use that object inside my controller instead – however, there are a few reasons why that is not desirable: For one thing, the HttpClient provides a nice API for dealing with HTTP services. I don't really *want* these to look like C# RPC method calls – when HTTP services have REST features, I may want to inspect HTTP response headers and hypermedia contained within the message so that I can make intelligent decisions as to what to do next in my workflow (although I don't happen to be doing these things in my example above) – this type of workflow is common in hypermedia REST scenarios.
    If I just encapsulate HttpClient behind some IRepository interface and make it look like a C# RPC method call, it becomes difficult to take advantage of these types of things. Second, it could get pretty mind-numbing to have to create interfaces all over the place just to wrap the HttpClient. Then you're probably going to have to hard-code HTTP knowledge into your code to formulate requests, rather than just "following the links" that the hypermedia in a message might provide. Third, at first glance it might appear that we need to create an interface to facilitate unit testing, but actually it's unnecessary. Even though the code above is dependent on a concrete type, it's actually very easy to fake the data in a unit test. The HttpClient provides a Channel property (of type HttpMessageChannel) which allows you to create a fake message channel that can be leveraged in unit testing. In this case, what I want is to be able to write a unit test that just returns fake data. I also want this to be as re-usable as possible for my unit testing. I want to be able to write a unit test that looks like this:

         1: [TestClass]
         2: public class HomeControllerTest
         3: {
         4:     [TestMethod]
         5:     public void Index()
         6:     {
         7:         // Arrange
         8:         var httpClient = new HttpClient("http://foo.com");
         9:         httpClient.Channel = new FakeHttpChannel<Person>(new Person { FirstName = "Joe", LastName = "Blow" });
        10:
        11:         HomeController controller = new HomeController(httpClient);
        12:
        13:         // Act
        14:         ViewResult result = controller.Index() as ViewResult;
        15:
        16:         // Assert
        17:         Assert.AreEqual("Joe Blow", result.ViewBag.Message);
        18:     }
        19: }

    Notice on line #9, I'm setting the Channel property of the HttpClient to be a fake channel. I'm also specifying the fake object that I want to be in the response of my "fake" HTTP request. I don't need to rely on any mocking frameworks to do this. All I need is my FakeHttpChannel. The code to do this is not complex:

         1: using System;
         2: using System.IO;
         3: using System.Net.Http;
         4: using System.Runtime.Serialization;
         5: using System.Threading;
         6: using FakeChannelExample.Models;
         7:
         8: namespace FakeChannelExample.Tests
         9: {
        10:     public class FakeHttpChannel<T> : HttpClientChannel
        11:     {
        12:         private T responseObject;
        13:
        14:         public FakeHttpChannel(T responseObject)
        15:         {
        16:             this.responseObject = responseObject;
        17:         }
        18:
        19:         protected override HttpResponseMessage Send(HttpRequestMessage request, CancellationToken cancellationToken)
        20:         {
        21:             return new HttpResponseMessage()
        22:             {
        23:                 RequestMessage = request,
        24:                 Content = new StreamContent(this.GetContentStream())
        25:             };
        26:         }
        27:
        28:         private Stream GetContentStream()
        29:         {
        30:             var serializer = new DataContractSerializer(typeof(T));
        31:             Stream stream = new MemoryStream();
        32:             serializer.WriteObject(stream, this.responseObject);
        33:             stream.Position = 0;
        34:             return stream;
        35:         }
        36:     }
        37: }

    The HttpClientChannel provides a Send() method which you can override to return any HttpResponseMessage that you want. You can see I'm using the DataContractSerializer to serialize the object and write it to a stream. That's all you need to do. In the example above, the only thing I've chosen to do is provide a way to return different response objects. But there are many more features you could add to your own re-usable FakeHttpChannel. For example, you might want to provide the ability to add HTTP headers to the message. You might want to use a different serializer other than the DataContractSerializer.
    You might want to provide custom hypermedia in the response as well as just an object, or set HTTP response codes. The list goes on. This is just one example of the really cool features being added to the next version of WCF to enable various HTTP scenarios. The code sample for this post can be downloaded here.

    Read the article

  • Develop website locally and push updates on Remote Server using Git

    - by John
    Together with a friend, we are looking to develop a website (using Symfony2). We are on shared hosting with SSH access. Below is the environment we'd like to set up:

    * Use Git as version control (we are new to Git)
    * Share the tasks and develop on our local machines
    * Push the updates onto the remote server

    Here are our initial thoughts on how to do it (assuming Git is already running both locally and remotely):

    * Install Symfony on the remote server (basic setup)
    * Clone the project locally (using Git)
    * Develop the project locally and push updates (using Git) to the remote server

    Does this approach make sense? If not, any recommendations? Thanks

    Read the article

  • How do I run a 64-bit guest in VirtualBox?

    - by ændrük
    I would like to have an Ubuntu 11.04 64-bit test environment. When I try booting the Ubuntu 11.04 64-bit installation CD in VirtualBox, the following message is displayed by VirtualBox: VT-x/AMD-V hardware acceleration has been enabled, but is not operational. Your 64-bit guest will fail to detect a 64-bit CPU and will not be able to boot. Please ensure that you have enabled VT-x/AMD-V properly in the BIOS of your host computer. What am I doing wrong? Details: VBox.log, ubuntu-test.vbox, and /proc/cpuinfo. Kernel: Linux aux 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:24 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux The Virtualization setting in the BIOS is set to Enabled.

    Read the article

  • http_proxy - not working with sudo [closed]

    - by Amil
    I am working on RHEL 6 behind a proxy at my workplace. In order to make wget and other applications work, I modified my /etc/environment and added definitions for http_proxy, https_proxy, and ftp_proxy. I logged out and back in. As desired, wget works correctly. However, commands like sudo wget and sudo yum do not work. Upon further investigation, I see the reason is that my http_proxy settings are not shown when I type "sudo env". In fact, PATH is also reset. How can I get http_proxy to be set for sudo?
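    A hedged aside (not from the question): sudo resets the environment by default, so one common fix is to whitelist the proxy variables in sudoers via visudo, or to preserve the caller's environment for a single command with sudo -E. A minimal sudoers sketch:

        Defaults    env_keep += "http_proxy https_proxy ftp_proxy no_proxy"

    With that line in place, the variables set in /etc/environment survive into sudo wget and sudo yum; yum can alternatively be given a proxy= line in /etc/yum.conf.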

    Read the article

  • The Calmest IT Guy in the World

    - by Markus Weber
    Unplanned outages and downtime still result not only in major productivity losses, but also in major financial losses. Along the same lines, if zero planned downtime and zero data loss are key to your IT environment and your business requirements, planning for those becomes very important, all while balancing performance, high availability, and cost. Oracle Database High Availability technologies will help you achieve these mission-critical goals, and are reflected in Oracle's best practices offerings of the Maximum Availability Architecture, or MAA. We created three neat, short videos showcasing some typical use cases and highlighting three important components (amongst many more) of MAA:

    - Oracle Real Application Clusters (RAC)
    - Oracle Active Data Guard
    - Oracle Flashback Technologies

    Make sure to watch those videos here, and learn about the challenges, and solutions, around high availability database environments from a recent Independent Oracle Users Group (IOUG) survey.

    Read the article

  • HSSFS Part 3: SQL Saturday is Awesome! And DEFAULT_DOMAIN(), and how I found it

    - by Most Valuable Yak (Rob Volk)
    Just a quick post I should've done yesterday, but I was recovering from SQL Saturday #48 in Columbia, SC, where I went to some really excellent sessions by some very smart experts. If you have not yet attended a SQL Saturday, or it's been more than 1 month since you last did, SIGN UP NOW! While searching the OBJECT_DEFINITION() of SQL Server system procedures I stumbled across the DEFAULT_DOMAIN() function in xp_grantlogin and xp_revokelogin. I couldn't find any information on it in Books Online, and it's a very simple, self-explanatory function, but it could be useful if you work in a multi-domain environment. It's also the kind of neat thing you can find by using this query:

        SELECT OBJECT_SCHEMA_NAME([object_id]) object_schema, name
        FROM sys.all_objects
        WHERE OBJECT_DEFINITION([object_id]) LIKE '%()%'
        ORDER BY 1, 2

    I'll post some elaborations and enhancements to this query in a later post, but it will get you started exploring the functional SQL Server sea.

    UPDATE: I goofed earlier and said SQL Saturday #46 was in Columbia. It's actually SQL Saturday #48, and SQL Saturday #46 was in Raleigh, NC.
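    For the curious, calling it is as simple as it sounds (a quick illustrative query; the function takes no arguments and returns the server's default domain name):

        SELECT DEFAULT_DOMAIN() AS default_domain;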

    Read the article

  • Agile Development Requires Agile Support

    - by Matt Watson
    Agile development

    Agile development has become the standard methodology for application development. The days of long-term planning with giant Gantt waterfall charts and detailed requirements are fading away. For years the product planning process frustrated product owners and businesses because, no matter the plan, nothing ever went to plan. Agile development throws the detailed planning out the window and instead focuses on giving developers some basic requirements and pointing them in the right direction. Constant collaboration via quick iterations with the end users, product owners, and the development team helps ensure the project is done correctly. The various agile development methodologies have helped greatly with creating products faster, but not without causing new problems. Complicated application deployments now occur weekly or monthly. Most of the products are web-based and deployed under a software-as-a-service model. System performance and availability of these apps become mission critical. This is all much different from the old process of mailing new releases of client-server apps on CD once per quarter or year. The steady stream of new products and product enhancements puts a lot of pressure on IT operations to keep up with the software deployments and to add infrastructure capacity. The problem is most operations teams still move slowly thanks to change orders, documentation, procedures, testing, and other processes. Operations can slow the process down and push back on the development team in some organizations. The DevOps movement is trying to solve some of these problems by integrating the development and operations teams more closely.

    Rapid change introduces new problems

    The rapid product change ultimately creates some application problems along the way. Higher rates of change increase the likelihood of new application defects. Delivering applications as a software service also means that scalability of applications is critical. Development teams struggle to keep up with application defects and scalability concerns in their applications. Fixing application problems is a never-ending job for agile development teams. Fixing problems before your customers do, and fixing them quickly, is critical. Most companies really struggle with this due to the divide between the development and operations groups. Fixing application problems typically requires querying databases, looking at log files, reviewing config files, reviewing error logs, and other similar tasks. It becomes difficult to work on new features when your lead developers are working on defects from the last product version.

    Developers need more visibility

    The problem is most developers are not given access to see server and application information in the production environments. The operations team doesn't trust giving all the developers the keys to the kingdom to log in to production and poke around the servers. The challenge is to either give them no access, or potentially too much access. Those with access can still waste time figuring out the location of the application and how to connect to it over VPN. In addition, reproducing problems in test environments takes too much time and isn't always possible. System administrators spend a lot of time helping developers track down server information. Most companies give key developers access to all of the production resources so they can help resolve application defects. The problem is only those key people have access, and they become a bottleneck.
    They end up spending 25-50% of their time on a daily basis trying to solve application issues because they are the only ones with access. These key employees' time is best spent on strategic new projects, not addressing application defects. This job should fall to entry-level developers, provided they have access to all the information they need to troubleshoot the problems.

    The solution to agile application support is giving all the developers limited access to the production environment and all the server information they need to see. Some companies create their own solutions internally to collect log files, centralize errors, or other things to address the problem. Some developers even have access to server monitoring or other tools. But the key is giving them access to everything they need so they can see the full picture, and giving that access to the whole team. Giving access to everyone scales up the application support team and creates collaboration around providing improved application support.

    Stackify enables agile application support

    Stackify has created a solution that can give all developers a secure and read-only view of the entire production server environment without console or remote desktop access. They provide a web application that gives real-time visibility into the important information that developers need to see. An application-centric view enables them to see all of their apps across multiple datacenters and environments. They don't need to know where the application is deployed, just the name of the application to find it and dig in to see more. All your developers can see server health, application health, log files, config files, the Windows event viewer, deployment history, application notes, and much more. They can receive email and text alerts when problems arise and even safely query your production databases. Stackify enables companies that do agile development to scale up their application support team by getting more team members involved. The lead developers can spend more time on new projects. Application issues can be fixed quicker than ever. Operations can spend less time helping developers collect server information. Agile application support starts with Stackify. Visit Stackify.com to learn more.

    Read the article

  • Trouble launching byobu with a custom windows.NAME file

    - by Brendan Piater
    Apologies if the solution is obvious, but I'm stumped! I want to launch byobu from within my GNOME 3 desktop environment on 12.04, but it's not working for me as yet. Here's what I have: I created a windows list file, ~/.byobu/windows.mrb, then tried to launch byobu from within the GNOME terminal with:

        $ BYOBU_WINDOWS=mrb byobu

    But it gives me the default windows list, NOT my windows list. I followed these instructions from the wiki: http://goo.gl/omgl8

    Here is a sanitised version of my windows.mrb file:

        screen -t local bash
        screen -t name1 ssh xxx@xxx
        screen -t name2 ssh root@xxx
        screen -t name3 bash

    Appreciate the time and effort. Cheers, Brendan

    Read the article

  • Developing an ELO like point system for a multiplayer gaming site

    - by Alejandro Piad
    I'm currently working on a gaming site where users will submit virtual players for different games, like Chess, Nash, Backgammon, Go, etc. The idea is that users don't compete themselves, but through their virtual players. There will be leagues, tournaments, and other competition formats. The question is which would be a good rating system for users in this environment. Take into account that every user may have many different virtual players playing in many different games. As a general guideline I would like to guarantee the following properties: Users who have a lot of mediocre players should not score higher than users with a few very good players. A user with a high rating should not be penalized if he adds a new bad player, until he has had enough time to improve his player. Users who don't play often should not score higher than users who play every day. Thanks in advance.
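    For reference, the core Elo update for a single game between two rated players is only a few lines; a minimal C# sketch (the K-factor of 32 and the idea of scoring a user by the average of their top N virtual players are illustrative assumptions, not part of the question):

        using System;

        public static class Elo
        {
            // Expected score of A against B given their current ratings.
            public static double ExpectedScore(double ratingA, double ratingB)
            {
                return 1.0 / (1.0 + Math.Pow(10.0, (ratingB - ratingA) / 400.0));
            }

            // Returns updated ratings; score is 1.0 for an A win, 0.5 for a draw, 0.0 for a loss.
            public static (double NewA, double NewB) Update(
                double ratingA, double ratingB, double score, double k = 32.0)
            {
                double expectedA = ExpectedScore(ratingA, ratingB);
                double newA = ratingA + k * (score - expectedA);
                double newB = ratingB + k * ((1.0 - score) - (1.0 - expectedA));
                return (newA, newB);
            }
        }

    The listed properties then become design choices layered on top: rate virtual players individually and aggregate only a user's best few (so a crowd of mediocre players doesn't add up), give new players a provisional period before they affect the owner's score, and apply a gentle inactivity decay so idle accounts drift down.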

    Read the article

  • Fast pixelshader 2D raytracing

    - by heishe
    I'd like to do a simple 2D shadow calculation algorithm by rendering my environment into a texture, and then using raytracing to determine which pixels of the texture are not visible to the point light (simply handed to the shader as a vec2 position). A simple brute-force algorithm per pixel would look like this:

        line_segment = line segment between current pixel of texture and light source
        For each pixel in the texture:
        {
            if pixel is not just empty space && pixel is on line_segment
                output = black
            else
                output = normal color of the pixel
        }

    This is, of course, probably not the fastest way to do it. Question is: what are faster ways to do it, or what optimizations can be applied to this technique?
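    One standard improvement (offered here as a hedged sketch, not the only answer) is to flip the loop: instead of testing every texture pixel against the segment, march each pixel's own ray toward the light and stop at the first occluder, so the cost per pixel is proportional to the ray length. A CPU-side C# version of that idea, with occluder[,] standing in for the occlusion texture and assuming the light lies inside the texture:

        using System;

        public static class ShadowMap2D
        {
            // Marks each pixel as lit (true) or shadowed (false).
            public static bool[,] ComputeVisibility(bool[,] occluder, int lightX, int lightY)
            {
                int w = occluder.GetLength(0), h = occluder.GetLength(1);
                var lit = new bool[w, h];

                for (int x = 0; x < w; x++)
                for (int y = 0; y < h; y++)
                {
                    float dx = lightX - x, dy = lightY - y;
                    float dist = (float)Math.Sqrt(dx * dx + dy * dy);
                    int steps = (int)dist;
                    bool visible = true;

                    // Step roughly one pixel at a time toward the light.
                    for (int i = 1; i < steps && visible; i++)
                    {
                        int sx = x + (int)(dx * i / dist);
                        int sy = y + (int)(dy * i / dist);
                        if (occluder[sx, sy])
                            visible = false;
                    }

                    lit[x, y] = visible;
                }
                return lit;
            }
        }

    In a pixel shader the inner loop is the same idea, sampling the occlusion texture along the ray. Faster still is the common two-pass trick: render a small 1D polar "shadow map" holding the nearest occluder distance per angle around the light, then shade each pixel by comparing its distance to the stored value.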

    Read the article

  • PHP developer wanting to learn python

    - by dclowd9901
    I'm pretty familiar at this point with PHP (Javascript, too), up to the point of OOP in PHP, and am looking to branch out my knowledge. I'm looking at Python next, but a lot of it is a bit alien to me as a PHP developer. I'm less concerned about learning the language itself. I'm positive there's plenty of good resources, documentation and libraries to help me get the code down. I'm less sure about the technical aspects of how to set up a dev environment, unit testing and other more mundane details that are very important, aid in rapid development, but aren't as widely covered. Are there any good resources out there for this?

    Read the article

  • Best Way To Develop Robust Cross-Platform Application?

    - by Clay
    Windows C programmer here (going back to 1992 and Windows95 back when it was called Windows93). Can function in C++, but mostly still a C programmer. Looking to build a cross-platform casual game. Very numbers heavy with only a few artistic embellishments and animations, so perhaps a development environment for business apps might be the best option. Or an easy-to-use 2D game dev platform. Target platforms: Windows, Mac, MS Tablet, iPhone, iPad, Android. I currently develop on Windows with Visual Studio 2012, but we could spend up to $50K on hardware/software/middleware if necessary. Not very competent getting open-source software working. Would rather pay the money and jump right into app development. Recommendations?

    Read the article

  • Creating the Business Card Request InfoPath Form

    - by JKenderdine
    Business Card Request Demo Files

    Back in January I spoke at SharePoint Saturday Virginia Beach about InfoPath forms and Web Part deployment. Below is some of the information and details regarding the form I created for the session. There are many blogs and Microsoft articles on how to create a basic form, so I won't repeat that information here. This blog will just explain a few of the options I chose when creating the solutions for SPS Virginia Beach. The above link contains the zipped package files of the two InfoPath forms (no-code solution and coded solution), the list template for the Location list I used, and the PowerPoint deck. If you plan to use these templates, you will need to update the forms to work within your own environments (change data connections, code links, etc.). Also, you must have the SharePoint Enterprise version, with InfoPath Services configured, in order to use the web-browser-enabled forms.

    So what are the requirements for this template?

    Business Card Request Form Template Design Plan:

    - Gather user information and requirements for the card
    - Pull in as much user information as possible; use data from the user profile web services as a data source
    - Show and hide fields as necessary for requirements
    - Create multiple views – one for those submitting the form and another view for the executive assistants placing the orders
    - Browser-based form integrated into a SharePoint team site
    - Submitted directly to a form library

    The base form was created using the blank template. The table and rows were added using the Insert tab and selecting Custom Table. The use of tables is a great way to make sure everything lines up. You do have to split the tables from time to time. If you've ever split cells and then tried to re-align one to find that you impacted the others, you know why. Here is what the base form looks like in InfoPath.

    Show and hide fields as necessary for requirements

    You will notice I also used Sections within the form. These show or hide depending on the options selected or whether or not fields are blank. This is a great way to prevent your users from feeling overwhelmed with a large form (this one wouldn't apply). Although not used in this one, you can also use various views with a tab interface. I'll show that in another post.

    Gather user information and requirements for the card; pull in as much user information as possible, using data from the user profile web services as a data source

    Utilizing rules, you can load data when the form initiates (Data tab, Form Load). Anything you can automate is always appreciated by the user, as that is data they don't have to enter - for example, loading their user id or other user information on load. Always keep in mind, though, how much data you load and the method for loading that data (through rules, code, etc.). They have an impact on form performance. The form will take longer to load if you bring in a ton of data from external sources. Laura Rogers has a great blog post on using the User Information List to load user information. If the user has logged into SharePoint, then this can be used quite effectively and without a huge performance hit. What I have found is that using the User Profile service via code-behind or the web service "GetUserProfileByName" (as above) can take more time to load the user data. Just food for thought. You must add the data connection in order for the above rules to work.
    You can connect to the data connection through the Data tab, Data Connections, or select the Manage Data Connections link which appears under the main data source. The data connections can be SharePoint lists or libraries, SQL data tables, XML files, etc.

    Create multiple views – one for those submitting the form and another view for the executive assistants placing the orders

    You can also create multiple views for the users to enhance their experience. Once they've entered the information and submitted their request for business cards, they don't really need to see the main data input screen any more; they just need to view what they entered. From the Page Design tab, select New View and give the view a name. To review the existing views, click the down arrow under View. The ReviewView shows just what the user needs and nothing more. Once you have everything configured, the form should be tested within a test SharePoint environment before final deployment to production. This validates that you don't have any rules or code that could impact the server negatively.

    Submitted directly to a form library

    You will need to know the form library that you will be submitting to when publishing the template. Configure the Submit data connection to connect to this library. There is already one configured in the sample, but it will need to be updated to your environment prior to publishing. The design template is different from the published template. While both have the .XSN extension, the published template contains all the "package" information for the form. The published form is what is loaded into Central Admin, not the design template.

    Browser-based form integrated into a SharePoint team site

    In Central Admin, under General Settings, select Manage Form Templates. Upload the published form template and activate it to a site collection. Now it is available as a content type to select in the form library. Some documentation on publishing form templates: Technet – Manage administrator approved form templates.

    And that's all our base requirements. Hope this helps to give a good start.

    Read the article

  • Workflow Overview & Best Practices - EMEA

    - by Annemarie Provisero
    ADVISOR WEBCAST: Workflow Overview & Best Practices - EMEA

    PRODUCT FAMILY: EBS - ATG - Workflow

    February 16, 2011 at 10:00 am CET, 02:30 pm India, 06:00 pm Japan, 08:00 pm Australia

    This 1.5-hour session is recommended for technical and functional users who are interested in getting a general overview of the tools and utilities available for taking a closer look into the Java Virtual Machine used in an E-Business Suite environment and how to tune it.

    TOPICS WILL INCLUDE:

    - Introduction to Workflow
    - Useful Utilities and Tools
    - Best Practices
    - Q&A

    A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services.

    Click here to register for this session

    -------------------------------------------------------------------------------------------------------------

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Benchmarking CPU processing power

    - by Federico Zancan
    Given that many tools for computer benchmarking are already available, I'd like to write my own, starting with processing power measurement. I'd like to write it in C under Linux, but other language alternatives are welcome. I thought of starting from floating point operations per second, but that is just a hint. I also thought it'd be sensible to keep track of the number of CPU cores, the amount of RAM, and the like, to more consistently associate results with the CPU architecture. How would you proceed with the task of measuring CPU computing power? And on top of that: I would worry about the minimum workload induced by concurrently running services; is it correct to run the benchmark as a standalone process (possibly isolated from the OS environment)?
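    To make the floating-point idea concrete, here is a minimal timing-loop sketch. The post targets C on Linux, but the structure is the same in any language; this version is in C#, and the two-operations-per-iteration count and the printed sink value (used to keep the loop from being optimized away) are illustrative choices, not a calibrated benchmark:

        using System;
        using System.Diagnostics;

        public static class FlopsBench
        {
            public static void Main()
            {
                const long iterations = 100_000_000;
                double a = 1.0000001, b = 0.9999999, sum = 0.0;

                var sw = Stopwatch.StartNew();
                for (long i = 0; i < iterations; i++)
                {
                    // One multiply + one add = 2 floating-point operations.
                    sum += a * b;
                }
                sw.Stop();

                double flops = (2.0 * iterations) / sw.Elapsed.TotalSeconds;
                Console.WriteLine($"Cores: {Environment.ProcessorCount}");
                Console.WriteLine($"~{flops / 1e9:F2} GFLOPS on one core (sum = {sum})");
            }
        }

    A scalar loop like this exercises a single core and a single pipeline; a fuller benchmark would run one pinned worker per core, use vectorized kernels, and repeat the measurement several times to average out the background services the post worries about.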

    Read the article
