Search Results

Search found 31989 results on 1280 pages for 'get method'.


  • How to Create a Minimize All Windows (Win + M) Hotkey for Mac OS X

    - by The Geek
    Windows users have been able to minimize every window on their desktop ever since keyboards with the Win key started showing up — just tap WIN + M on your keyboard, and every window is minimized. On Mac OS X, it’s not quite as simple. You can, of course, use the CMD + OPT + H + M shortcut key combination to hide most windows… but that’s a lot of keys to hit at once, and in my experience it doesn’t always minimize everything. So, like everything else I wanted from Windows, it was time to figure out how to get it on OS X as well. This method uses QuickSilver to provide the shortcut key trigger — if there’s a better way to do that, please let us know. Luckily OS X includes a nice scripting platform, and we can use the following script from a helpful person over at SuperUser to make this all happen.

        tell application "Finder" to activate
        tell application "System Events"
            tell application process "Finder"
                tell menu bar 1
                    click menu item "Hide Others" of menu of menu bar item "Finder"
                    click menu item "Minimize All" of menu of menu bar item "Window"
                end tell
            end tell
        end tell

    Open up a new AppleScript Editor window and paste in the script from above. Then go to File and Save.

    Read the article

  • Ubuntu One synchronizing problems

    - by user72249
    I am a user of Ubuntu and Ubuntu One.

    First: I had a serious problem with Ubuntu One. I have a Windows machine and several Ubuntu machines, and they all sync one account. My first problem is that I can't really delete files. Whenever I delete something, Ubuntu One resyncs it, so it appears again in my folder. Furthermore, when I move something to another directory it synchronizes the new location and resyncs the old one, so my files get doubled. I can't reorganize my files. So I tried to delete the duplicated files through the web dashboard, but I can't reorder and select multiple files by extension. For example, I wanted to move all PDFs to another location...

    Second: I can't configure the location of the main One folder in the Ubuntu One app. For example, I want my One folder to be on another partition than my HOME folder. This is a problem on Linux and Windows alike. I tried to move that folder in Windows with the hardlink method: I uninstalled U1, then created a folder link to another drive, then installed U1. THEN I lost all my files in my U1. Lucky thing that I have a backup, but I thought U1 was a stable solution for me, and I planned to extend my space, but these bugs are major problems! It would be worth paying for if it worked perfectly. I think that's more important than releasing a new version every month.

    Read the article

  • Generating a Normal map from an Image with a given Albedo map

    - by snape
    I am working on a research problem, part of which involves generating a normal map from a given image of a rusted object. I searched the internet for techniques to achieve the above, and apparently CrazyBump is mentioned a lot. I tried it, but it didn't produce the desired effects. Also, I am looking for a method that draws inspiration from an existing research paper, not some closed-source software. I turned my attention to the technique described in this paper. Results from this technique are satisfactory for normal objects because of bias in the training data, but it doesn't work very well in the case of rusted objects. After this I focused my attention on generating an albedo map (the above problem becomes more solvable if an albedo map is obtained). Fortunately I am able to generate pretty good albedo maps for images of rusted objects; I used this paper's approach to generate them. Now I want to know a good technique to get a normal map given an image and its corresponding albedo map. To give you an idea of what kind of images I am working with, I am attaching a sample. Links to research material would be really appreciated. Thanks!

    Read the article

  • Circular dependency and object creation when attempting DDD

    - by Matthew
    I have a domain where an Organization has People.

    Organization Entity

        public class Organization
        {
            private readonly List<Person> _people = new List<Person>();

            public Person CreatePerson(string name)
            {
                var person = new Person(this, name);
                _people.Add(person);
                return person;
            }

            public IEnumerable<Person> People
            {
                get { return _people; }
            }
        }

    Person Entity

        public class Person
        {
            public Person(Organization organization, string name)
            {
                if (organization == null)
                {
                    throw new ArgumentNullException("organization");
                }
                Organization = organization;
                Name = name;
            }

            public Organization Organization { get; private set; }
            public string Name { get; private set; }
        }

    The rule for this relationship is that a Person must belong to exactly one Organization. The invariants I want to guarantee are:

    A person must have an organization – this is enforced via the Person's constructor.
    An organization must know of its people – this is why Organization has a CreatePerson method.
    A person must belong to only one organization – this is why the organization's people list is not publicly mutable (ignoring the casting to List; maybe AsEnumerable can enforce that, not too concerned about it though).

    What I want out of this is that if a person is created, the organization knows about its creation. However, the problem with the model currently is that you are able to create a person without ever adding it to the organization's collection. Here's a failing unit test to describe my problem:

        [Test]
        public void AnOrganizationMustKnowOfItsPeople()
        {
            var organization = new Organization();
            var person = new Person(organization, "Steve McQueen");
            CollectionAssert.Contains(organization.People, person);
        }

    What is the most idiomatic way to enforce the invariants and the circular relationship?
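
    For reference, one common way to close this hole, offered here only as a hedged sketch of one option rather than the definitive answer, is to make the Person constructor internal so that Organization.CreatePerson becomes the only creation path (the types are the ones from the question; the internal modifier is the only change):

        // Sketch: Person can no longer be constructed directly by code outside
        // the domain assembly, so the aggregate root controls creation.
        public class Person
        {
            internal Person(Organization organization, string name)
            {
                if (organization == null)
                {
                    throw new ArgumentNullException("organization");
                }
                Organization = organization;
                Name = name;
            }

            public Organization Organization { get; private set; }
            public string Name { get; private set; }
        }

    With this, the failing test's direct call to new Person(organization, "Steve McQueen") no longer compiles outside the domain assembly (a test assembly can be granted access with InternalsVisibleTo), so every Person is guaranteed to pass through CreatePerson and land in the organization's collection.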

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We’ll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them and how to manage their lifetime.

    Installing the driver

    Info on how to install the driver can be found in an earlier blog post here.

    Establishing connections

    As you click on the “Add Connection” link in the left pane you will notice that it’s now possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it’s possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable, hence you can create a new application if you don’t want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before it gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there’s any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached, etc.).

    The context for remote servers

    Let’s take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you’re writing. If you’re connecting to a live server the context will contain the following: the application object itself, and all entities present in this application (sources, sinks, subjects and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later.

    Remoting is not supported

    As you play with the entities exposed by the context you will notice that you can’t read and write directly to/from them. If, for instance, you’re trying to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server and dumping its content means reading the events produced by this entity into the local process.

        ObservableSource.Dump();

    will yield the following error:

        Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer.
    This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can’t bring the results to the LinqPad window unless you write code specifically for that.

    Compose queries

    You may ask – what's the purpose of all that? After all, the same information is present in the EventFlowDebugger, so why bother with showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense, we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let’s say that this code creates an entity:

        using (var server = Server.Connect(...))
        {
            var a = server.CreateApplication("WhiteFish");
            var src = a
                .DefineObservable<int>(() => Observable.Range(0, 3))
                .Deploy("ObservableSource");

    If later we want to compose with the source we have to fetch it, and then we can bind something to it:

        a.GetObservable<int>("ObservableSource").Bind(...

    This means that we had to know a bunch of things about it: that it’s a source, that it’s an observable, that it produces a result with payload Int32 and that it’s named “ObservableSource”. Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them and it’s much easier to make sense of what’s available.

    Let’s look at a scenario where composition is plausible. With the new programming model it’s possible to create “cold” sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it’s much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>>. This in effect means that we need to pass two int parameters before we can get a source that produces events, and the type of those events is int. In the particular case of my example, I had the source produce a range of integers and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet; then someone else (e.g. an administrator) can pass whatever parameters are appropriate and run the process.

    Proxy Types

    Here’s an interesting scenario – what if someone created a source on a server but they forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name, you can’t figure out how to use that type. Let’s walk through an example that shows how you can compose against types you don’t need to have the definition of. This is how we can create a source that returns an anonymous type:

        Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1");

    Now if we refresh the connection we can see the new source named O1 appear in the list. But what’s more important is that we now have a type to work with, so we can compose a query that refers to the anonymous type.
        var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5);
        var filter = from i in O1
                     where i > threshold
                     select i;
        filter.Deploy("O2");

    You will notice that the anonymous type defined with the statement new { I = i } can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is whether the LinqPad driver can resolve the type or not. If that’s not possible then a new type will be generated that approximates the type that exists on the server.

    Control metadata

    In addition to composing processes on top of the existing entities we can do other useful things. We can delete them – nothing new here, as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There’s another way to find out what’s behind a property – dump its expression. The first line in the output tells us the name of the entity used to build this property in the context.

    Runtime information

    So let’s create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right-click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you have access to the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view, etc.).

    Regards,
    The StreamInsight Team
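
    Putting those fragments together, a complete round trip might look roughly like the sketch below. Server.Connect, CreateApplication, DefineObservable, Deploy, GetObservable and Bind are all named above; the endpoint address, the DefineObserver and Run calls and the sink details are assumptions about the 2.1 API made purely for illustration:

        // Hedged sketch: deploy a cold source and a sink, then bind them into a process.
        using (var server = Server.Connect(new System.ServiceModel.EndpointAddress(
            "http://remoteserver/StreamInsight/MyInstance")))   // address is made up
        {
            var app = server.CreateApplication("WhiteFish");

            // Deploy the cold source quoted in the post.
            app.DefineObservable<int>(() => Observable.Range(0, 3))
               .Deploy("ObservableSource");

            // The sink must live on the remote server too (remoting is not supported).
            // DefineObserver/Run are assumed API shapes, not quoted from the post.
            var sink = app.DefineObserver(() => Observer.Create<int>(i => Console.WriteLine(i)))
                          .Deploy("ObservableSink");

            app.GetObservable<int>("ObservableSource")
               .Bind(sink)
               .Run("SampleProcess");
        }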

    Read the article

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company a2hosting has a specific configuration (symfony, mysql, git) that I don't want to spend time duplicating when I can just ssh in and develop remotely or through NetBeans' remote editing features. My question is how I can use git to separate my site into three areas: live, staging and dev. Here's my initial thought:

    public_html (live site and git repo)
    testing: a mirror of the site used for visual tests (full git repo)
    dev/ticket# : git branches of public_html used for features and bug fixes (full git repo)

    Version control with git. Initial setup:

        cd public_html
        git init
        git add *
        git commit -m "initial commit of the site"
        cd ..
        git clone public_html testing
        mkdir dev

    Development:

        cd dev
        git clone ../testing ticket#

    All work is done in ./dev/ticket#, then visit www.domain.com/dev/ticket# to visually test. Make granular commits as necessary until dev is done, then:

        git push origin master:ticket#

    If the above fails, merge the latest testing state into the current dev work (git merge origin/master), then try the push again and mark ticket# as ready for integration.

    Integration and deployment process:

        cd ../../testing
        git merge ticket# --no-ff -m "integration test for ticket#"

    Check for conflicts, run hudson tests, and visit www.domain.com/testing for a visual test. If all tests pass: if this ticket marks the end of a big dev sprint, make a snapshot with git tag and git push --tags origin, else git push origin. Then:

        cd ../public_html
        git checkout -f

    (the live site should now have the latest dev from ticket#). Otherwise, revert the merge (git checkout master~1; git commit -m "reverting ticket#") and update ticket# that testing failed, with the failure details.

    Snapshots: each major deployment sprint should have a standard name and should be tracked. Method: git tag. Naming convention: TBD.

    Reverting the site to a previous state: if something goes wrong, revert to the previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again.

    My questions: Does this workflow make sense? If not, any recommendations? Is my approach for reverting correct, or is there a better way to say 'revert to before commit x'?

    Read the article

  • Munq is for web, Unity is for Enterprise

    - by oazabir
    The Unity Application Block (Unity) is a lightweight, extensible dependency injection container with support for constructor, property, and method call injection. It’s a great library for facilitating Inversion of Control, and the recent version supports AOP as well. However, when it comes to performance, it’s CPU hungry. In fact it’s so CPU hungry that it makes it impossible to make it work at Internet scale. I was investigating a CPU issue on a portal that gets around 3MM hits per day and I found unusually high CPU. Here’s why: I did some CPU profiling on my open source project Dropthings and found that the highest CPU is consumed by Unity’s Resolve<>(). There’s no funky use of Unity in the project – straightforward Register<>() and Resolve<>(). But as you can see, Resolve<>() is consuming significantly high CPU even after the site is warm and has been running for a while. Then I tried Munq, which is a basic dependency injection container. It has everything you will usually need in a regular project, and it boasts being the fastest DI container out there. So I converted all the Unity code to Munq in Dropthings, did a CPU profile and voilà! There’s no trace of any Munq calls anywhere. That proves Munq is a lot faster than Unity.
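
    A rough way to reproduce this kind of comparison yourself, a minimal sketch rather than the profiling method the post used, is to time repeated Resolve calls with a Stopwatch. The RegisterType/Resolve calls below match Unity's public API; IEmailSender and SmtpEmailSender are made-up placeholder types, and Munq would be timed the same way with its own register/resolve calls:

        using System;
        using System.Diagnostics;
        using Microsoft.Practices.Unity;

        public interface IEmailSender { }                 // placeholder service
        public class SmtpEmailSender : IEmailSender { }   // placeholder implementation

        public static class ResolveBenchmark
        {
            public static void Main()
            {
                var container = new UnityContainer();
                container.RegisterType<IEmailSender, SmtpEmailSender>();

                // Warm up so JIT and first-resolve costs don't skew the numbers.
                container.Resolve<IEmailSender>();

                var watch = Stopwatch.StartNew();
                for (int i = 0; i < 1000000; i++)
                {
                    container.Resolve<IEmailSender>();
                }
                watch.Stop();

                Console.WriteLine("1M resolves: {0} ms", watch.ElapsedMilliseconds);
            }
        }

    A loop like this only measures raw container overhead, which is exactly the cost the post says dominated on a warm site; a real profiler is still the right tool for finding where that overhead lands in your own call tree.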

    Read the article

  • CodePlex Daily Summary for Sunday, December 05, 2010

    CodePlex Daily Summary for Sunday, December 05, 2010

    Popular Releases

    SubtitleTools: SubtitleTools 1.0: First public release.
    MiniTwitter: 1.62: MiniTwitter 1.62.
    Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (December 2010): The release is targeted for stable daily use. With improved performance and enhanced compatibility with several of the latest PHP open source applications, it makes this release a perfect replacement for your old PHP runtime. Changes made within this release include the following and much more: performance improvements based on real-world application experience; we determined the biggest bottlenecks and we found and removed overheads causing performance problems in many PHP applications. Reimplemented nat...
    Chronos WPF: Chronos v2.0 Beta 3: Release notes: updated introduction document; updated Visual Studio 2010 Extension (vsix) package; added horizontal scrolling to the main window TaskBar; added new styles for ListView, ListViewItem, GridViewColumnHeader, ...; added a new WindowViewModel class (allowing to fetch data); added a new Navigate method (with several overloads) to the NavigationViewModel class (protected); reimplemented Task usage for the WorkspaceViewModel.OnDelete method; removed the reflection effect...
    MDownloader: MDownloader-0.15.26.7024: Fixed updater; fixed Megaupload.
    DJ - jQuery WebControls for ASP.NET: DJ 1.2: What is new? Updated to support jQuery 1.4.2 and jQuery UI 1.8.6; updated to Visual Studio 2010; new WebControls with samples added: Autocomplete WebControl, Button WebControl, ToggleButton WebControl. The example web site is included in the source code project.
    LateBindingApi.Excel: LateBindingApi.Excel Release 0.7g: Differences from the previous version: additional Interior properties; Group/Ungroup methods for Range; bugfix for COM reference handling of the Application object in some classes. Release+Samples V0.7g contains the runtime DLL and sample projects. Sample projects: COMAddinExample - demonstrates a version-independent COM add-in; Example01 - background colors and borders for cells; Example02 - font attributes and alignment for cells; Example03 - number formats; Example04 - shapes, WordArts, P...
    ESRI ArcGIS Silverlight Toolkit: November 2010 - v2.1: Added Windows Phone 7 build. New controls added: InfoWindow, ChildPage (Windows Phone 7 only). See the full details of what's new at http://help.arcgis.com/en/webapi/silverlight/help/#/What_s_new_in_2_1/016600000025000000/ Note: requires Visual Studio 2010, .NET 4.0 and Silverlight 4.0.
    ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.1: Atomic CMS 2.1.1 release notes; Atomic CMS installation guide.
    Winware: Winware 3.0 (.Net 4.0): Winware 3.0 is based on .Net 4.0 with C#. Please open it with Visual Studio 2010. This release contains a lab web application.
    UltimateJB: UltimateJB 2.02 PL3 KAKAROTO + CE-X-3.41 EvilSperm: Here is a version that many have been waiting for impatiently: the CEX341 version makes it possible to play games that require firmware 3.50 (some simply do not work). For now, CEX341 is only available for PS3s on firmware 3.41! The PL3 KAKAROTO version integrates its latest modifications and now supports firmware 3.30! Conclusion: UltimateJB CEX341 spoofs firmware 3.41 as 3.50 (making it easier to use certain games with openManage...
    Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.5 beta Released: Today we are releasing Visifire 3.6.5 beta with the following new feature: a new property, AutoFitToPlotArea, has been introduced in DataSeries; it brings bubbles inside the PlotArea in order to avoid clipping of bubbles in a bubble chart. This release also includes a few bug fixes: AxisXLabel labels were getting clipped if an angle was set for AxisLabels and ScrollingEnabled was not set in the Chart; if the LabelStyle property was set to 'Inside', the size of the pie was not proper. Yo...
    EnhSim: EnhSim 2.1.1: This release adds in the changes for 4.03a. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Switched Searing Flames bac...
    AI: Initial 0.0.1: It's simply just one code file; it simulates an AI and a machine in a simulated world. The AI has a little understanding of its body machine and parts, and is able to use its feet to do actions, just starting and stopping walking. The world is all white, with nothing but the machine on a white planet. Colors, odors and position information make no sense. I'm previously a C# programmer and I'm learning F# during this project; although I'm still not a good F# programmer, in this project I am learning to prog...
    NKinect: NKinect Preview: Build features: accelerometer reading; motor serial number property; realtime image update; realtime depth calculation; export to PLY (on demand); control motor LED; control Kinect tilt.
    Microsoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V1.0 - N-Layer DDD Sample App .NET 4.0: Required software (Microsoft base software needed for the development environment): Visual Studio 2010 RTM & .NET 4.0 RTM (final versions); Expression Blend 4; SQL Server 2008 R2 Express/Standard/Enterprise; Unity Application Block 2.0 - published May 5th 2010: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010 - Visual Studio 2010 Power Tools http://re...
    Sense/Net Enterprise Portal & ECMS: SenseNet 6.0.1 Community Edition: This half year we have been working quite fiercely to bring you the long-awaited release of Sense/Net 6.0. Download this Community Edition to see what we have been up to. These months we have worked on getting the WebCMS capabilities of Sense/Net 6.0 up to par. New features include: new, powerful page and portlet editing experience; HTML and CSS cleanup; new, powerful site skinning system; upgraded, lightning-fast indexing and query via Lucene. Limita...
    Minecraft GPS: Minecraft GPS 1.1.1: New features: Compass! New style. Set opacity on main window to allow overlay of Minecraft. Open World in any folder. Fixes: fixed style so the listbox won't grow the window size; fixed open file dialog issue on non-Vista kernel machines.
    DotSpatial: DotSpatial 11-28-2010: This release introduces some exciting improvements: support for big rasters, both in display and changing the scheme; faster raster scheme creation for all rasters; caching of the "sample" values so once obtained the raster symbolizer dialog loads faster; reprojection supported for raster and image classes; affine transform fully supported for images and rasters, so skewed images are now possible; projection uses better checks when loading unprojected layers; GDAL raster support f...
    SuperWebSocket: SuperWebSocket(60438): It is the first release of SuperWebSocket. Because it is based on SuperSocket, most features of SuperSocket are supported in SuperWebSocket. The source code includes a LiveChat demo.

    New Projects

    Bambook???: (description garbled in the original)
    Beespot: Beespot is an easy to use, secure, robust and powerful honeypot for the SSH service, written in Python.
    caitanzhangDemo: this is my demo
    ColorPicker [SA:MP]: ColorPicker [SA:MP] is a simple tool that generates PAWN hex color codes (useful for SA:MP scripts), ARGB color codes and HTML color codes. It's developed in C#.
    Conversions-n-Stuff: Conversions-n-Stuff (CNS) is a program focused on making it easier for anyone to convert from one measurement to another. There is no need to know the calculations and formulas! Just fill in the forms and click, and you have your answer! CNS leverages C#, WPF, and Silverlight.
    dotP2P: dotP2P would consist of servers running caches to keep track of domain and nameserver records. Cache servers can be created with any server that supports XML-RPC or SOAP. MySQL is used to store the cache data.
    EmailMasterTemplate: A master and child user control based email template engine.
    Ezekiel: Ezekiel is a Windows application that leverages a user's existing BusinessObjects reports to provide a custom read-only front end for a database. It's developed using Visual C# 2010 Express.
    F# Colorizer Editor: Standalone colorizer editor for Brian's F# Deep Colorizer VS extension.
    GameCore: Core engine for game services for mobile and RIA clients.
    Gestão de contas bancárias: Final project for Applied Mathematics at UATLA.
    GPP: GPP
    kmean: K-means clustering.
    PHP ORM: (description garbled in the original)
    Steampunk Odyssey: Steampunk Odyssey is a side-scrolling action game based on the XNA platform.
    SubtitleTools: SubtitleTools is a small utility that helps modify existing subtitles or download new ones based on the digital signatures of your movie files from the opensubtitles.org site.
    Windows Phone 7 Accelerometer: Accelerometer for Windows Phone 7.

    Read the article

  • How are events in games handled?

    - by Alex
    In many games that I have played, I have seen events being triggered; for example, when you walk into a certain land area while holding a specific object, it will trigger a special creature to spawn. I was wondering, how do games deal with events such as this? Not in a specific game, but in general among games. The first thought I had was that each place has a hard-coded set of events that it will call when something happens there. However, that would be too inefficient to maintain, as whenever something new was added, every part of the game that could potentially cause the event to be called would need modification. Next, I had the idea of doing it the way GUI programming works. In all of the GUI programming I've done, you create a component and a callback function, or a listener. Then, when the user interacts with the button, the callback function is called, allowing you to do something with it. So, I was thinking that in terms of a game, when a land area gets loaded the game loops over a list of all events, creating instances of them and calling public methods to bind them to the current scene. The events themselves then check which scene it is and, if it is a scene that pertains to the event, call the public method of the scene to bind the event to an action. Then, when the action takes place, the scene would call all events that are bound to that action. However, I'm sure that's not how games would operate either, as that would require a lot of creating of events all the time. So how do video games handle events? Are either of those methods correct, or is it something completely different?
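
    For reference, one common shape for the listener idea described above is a central event bus keyed by event type. The sketch below is purely illustrative (all names are hypothetical) and is not how any particular engine does it:

        using System;
        using System.Collections.Generic;

        // Hypothetical event payload: raised when the player enters a named area.
        public class AreaEnteredEvent
        {
            public string AreaName;
            public string HeldItem;
        }

        // Minimal event bus: systems subscribe by event type, and whoever detects
        // the condition publishes without knowing who is listening.
        public static class EventBus
        {
            private static readonly Dictionary<Type, List<Action<object>>> Handlers =
                new Dictionary<Type, List<Action<object>>>();

            public static void Subscribe<T>(Action<T> handler)
            {
                List<Action<object>> list;
                if (!Handlers.TryGetValue(typeof(T), out list))
                {
                    list = new List<Action<object>>();
                    Handlers[typeof(T)] = list;
                }
                list.Add(e => handler((T)e));
            }

            public static void Publish<T>(T evt)
            {
                List<Action<object>> list;
                if (Handlers.TryGetValue(typeof(T), out list))
                {
                    foreach (Action<object> handler in list)
                    {
                        handler(evt);
                    }
                }
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                // A spawner registers its interest once, at load time...
                EventBus.Subscribe<AreaEnteredEvent>(e =>
                {
                    if (e.AreaName == "SwampCave" && e.HeldItem == "CursedAmulet")
                    {
                        Console.WriteLine("Spawn the special creature");
                    }
                });

                // ...and the movement code publishes whenever the player crosses a boundary.
                EventBus.Publish(new AreaEnteredEvent { AreaName = "SwampCave", HeldItem = "CursedAmulet" });
            }
        }

    The point of the indirection is exactly what the question is reaching for: neither the land area nor the movement code has a hard-coded list of consequences, so adding a new triggered behaviour means adding one new subscriber rather than modifying every place that can fire the event.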

    Read the article

  • Announcing the MOS WCI "Community"

    - by brian.harrison
    The WCI Technical Support team are pleased to announce the launch of the long-awaited WCI Support Community on My Oracle Support (MOS) "Community". Users can navigate to this "first stop" for WebCenter Interaction information by following this link: WCI Community (note that this requires a valid login credential for the My Oracle Support tool). In this community you'll find a product-related discussion forum moderated by Oracle WebCenter Interaction support engineers, recommended tips and tricks, links to knowledge base articles and best practices for setting up and administering your environment. We hope you'll take a minute to have a look through the community. If you have a question about WebCenter Interaction, a comment or a suggestion regarding the content, please feel free to post it to the forum and someone will respond to your request. Think of the forum here as another method to communicate directly with the WCI Technical Support team for questions and answers to simple WCI support topics. The forum is moderated by WCI Technical Support engineers directly and we hope it will help you avoid the need to log support incidents for less complex support-related questions. We encourage all of our customers, both internal and external, to participate in the forum's discussions, sharing information, knowledge and best practices, in the effort to help us build a vital and vibrant "home base" for WCI users on the My Oracle Support tool. Thank you for visiting! The WebCenter Interaction Support Community Moderator Team

    Read the article

  • 2D Tile Map files for Platformer, JSON or DB?

    - by Stephen Tierney
    I'm developing a 2D platformer with some uni friends. We've based it upon the XNA Platformer Starter Kit, which uses .txt files to store the tile map. While this is simple, it does not give us enough control and flexibility with level design. Some examples: multiple files are required for multiple layers of content; each object is fixed onto the grid; it doesn't allow for rotation of objects; there is a limited number of characters; etc. So I'm doing some research into how to store the level data and map file.

    Reasoning for DB: From my perspective I see less redundancy of data using a database to store the tile data. Tiles in the same x,y position with the same characteristics can be reused from level to level. It seems like it would be simple enough to write a method to retrieve all the tiles that are used in a particular level from the database.

    Reasoning for JSON: Visually editable files; changes can be tracked via SVN a lot more easily. But there is repeated content.

    Do either have any drawbacks (load times, access times, memory etc.) compared to the other? And what is commonly used in the industry? Currently the file looks like this (a hypothetical JSON version is sketched below):

        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        .........GGG........
        .........###........
        ....................
        ....GGG.......GGG...
        ....###.......###...
        ....................
        .1................X.
        ####################

    1 - Player start point, X - Level Exit, . - Empty space, # - Platform, G - Gem
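
    For comparison, a hypothetical JSON encoding of a level like the one above might look as follows. The schema (field names, layer structure) is invented for illustration, not taken from any engine or the XNA starter kit; it simply shows the things the flat .txt grid cannot express, such as named layers, per-object rotation and off-grid metadata:

        {
          "name": "Level 1",
          "width": 20,
          "height": 15,
          "layers": [
            {
              "name": "platforms",
              "tiles": [
                { "x": 9,  "y": 8,  "type": "platform" },
                { "x": 4,  "y": 11, "type": "platform" },
                { "x": 14, "y": 11, "type": "platform" }
              ]
            },
            {
              "name": "items",
              "tiles": [
                { "x": 9, "y": 7, "type": "gem", "rotation": 45 }
              ]
            }
          ],
          "playerStart": { "x": 1,  "y": 13 },
          "exit":        { "x": 18, "y": 13 }
        }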

    Read the article

  • How to solve Bumblebee/Nvidia Optimus issues with kernel 3.4 (works perfectly under 3.2)

    - by theJimy
    I installed Ubuntu and could set it up to utilize my Intel HD 3000/Geforce GT 540M hybrid graphics perfectly with the method described here: How well do laptops with Nvidia Optimus work? Everything works fine under kernel 3.2. Now I wanted to upgrade to kernel 3.4, as it brings many improvements, especially in saving battery life (e.g. Intel RC6)... at least from what I've heard. While I had no issues installing the 3.4 kernel under Ubuntu 12.04, and everything so far runs fine, Bumblebee causes issues under kernel 3.4. When trying to run commands like optirun or lsmod (or similar kernel tools), these just lock up and never return. The Bumblebee developers seem to refuse to help with mainline kernels (as seen here: https://github.com/Bumblebee-Project/bbswitch/issues/17 ). Does anyone know how to solve this issue? Could I perhaps solve it by compiling the kernel and/or Bumblebee against the kernel's sources myself and having an Ubuntu-like kernel? Any other idea that might help me solve this myself, so I could profit from the 3.4 features and Optimus, would be very much appreciated.

    Read the article

  • Debugging .NET 2.0 assembly from unmanaged code in VS2010?

    - by Rick Strahl
    I’ve run into a serious snag trying to debug a .NET 2.0 assembly that is called from unmanaged code in Visual Studio 2010. I maintain a host of components that use COM interop and custom .NET runtime hosting, and ever since installing Visual Studio 2010 I’ve been utterly blocked by VS 2010’s apparent inability to debug .NET 2.0 assemblies when launching through unmanaged code. Here’s what I’m actually doing (a simplified scenario to demonstrate):

    I have a .NET 2.0 assembly that is compiled for COM Interop
    Compile the project with a .NET 2.0 target and register it for COM Interop
    Set a breakpoint in the .NET component in one of the class methods
    Instantiate the .NET component via COM interop and call the method

    The result is that the COM call works fine but the debugger never triggers on the breakpoint. If I now take that same assembly and target it at .NET 4.0, without any other changes, everything works as expected – the breakpoint set in the assembly project triggers just fine. The easy answer to this problem seems to be “Just switch to .NET 4.0” but unfortunately the application and the way the runtime is actually hosted have a few complications. Specifically, the runtime hosting uses the .NET 2.0 hosting APIs, and apparently the only reliable way to host the .NET 4.0 runtime is to use the new hosting APIs that are provided only with .NET 4.0 (which all by itself is lame, lame, lame, as once again the promise of backwards compatibility is broken by .NET). So for the moment I need to continue using the .NET 2.0 hosting APIs due to application requirements. I’ve been searching high and low and experimenting back and forth, and posted a few questions on the MSDN forums, but haven’t gotten any hints on what might be causing the apparent failure of Visual Studio 2010 to debug my .NET 2.0 assembly properly when called from unmanaged code. Incidentally, debugging .NET 2.0 targeted assemblies works fine when running with a managed startup application – it seems the issue is specific to the unmanaged code starting up. My particular issue is with custom runtime hosting, which at first I thought was the problem. But the same issue manifests when using COM interop against a .NET 2.0 assembly, so the hosting is probably not the issue. Curious if anybody has any ideas on what could be causing the lack of debugging in this scenario?

    © Rick Strahl, West Wind Technologies, 2005-2010
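
    For anyone wanting to reproduce the scenario, a minimal COM-visible .NET 2.0 class looks roughly like this. It is a generic interop sketch, not Rick's actual component; the class name and GUID are placeholders:

        using System;
        using System.Runtime.InteropServices;

        // Compile against .NET 2.0 as a class library and register with
        // "regasm /codebase MyInteropSample.dll" so unmanaged callers can create it.
        [ComVisible(true)]
        [Guid("11111111-2222-3333-4444-555555555555")]   // placeholder GUID
        [ClassInterface(ClassInterfaceType.AutoDual)]
        public class InteropSample
        {
            // Set the breakpoint here, then instantiate the class from unmanaged
            // code (e.g. CreateObject from script or CoCreateInstance from C++)
            // and call the method to see whether the debugger attaches.
            public string HelloWorld()
            {
                return "Hello from .NET 2.0";
            }
        }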

    Read the article

  • CodePlex Daily Summary for Tuesday, July 16, 2013

    CodePlex Daily Summary for Tuesday, July 16, 2013

    Popular Releases

    OsmSharp: OsmSharp v3.1.4840.2: Release notes: fixed issue http://osmsharp.codeplex.com/workitem/1246 'Optimized route has zero TotalDistance'; fixed lazy loading strategy for datasource routing; fixed precision bug in View; added unit tests for instruction generation; added daily build nuget packages (daily build packages at http://5.9.56.3:8080/guestAuth/app/nuget/v1/FeedService.svc/); fixed default direction on roundabouts: roundabouts are always oneway; take the average of left+right turns; fixed UTF8 encoding issu...
    PMU Connection Tester: PMU Connection Tester v4.4.0: This is the current release build of the PMU Connection Tester, version 4.4.0. This version of the connection tester was released with openPDC 1.5 SP1 and openPDC 2.0 BETA. This application requires that .NET 4.0 already be installed on your system. Note this is the last release of the PMU Connection Tester that will be built on .NET 4.0 using the TVA Code Library and the Time-series Framework. Future releases of the PMU Connection Tester will be built on .NET 4.5 (or later) using the Grid Sol...
    WatchersNET.SiteMap: WatchersNET.SiteMap 01.03.08: Changes: the localized tab name is now used.
    StackWalker - Walking the callstack: StackWalker - 2013-07-15: This project describes and implements the (documented) way to walk a callstack for any thread (own, other and remote). It has an abstraction layer, so the calling app does not need to know the internals. This release has some bugfixes for VC5/6.
    GoAgent GUI: GoAgent GUI 1.1.0: (release notes garbled in the original)
    Wsus Package Publisher: Release v1.2.1307.15: Fixed a bug where WPP crashes if 'ShowPendingUpdates' is started with wrong credentials. Fixed a bug where WPP crashes if ArrivalDateAfter and ArrivalDateBefore are equal in the ComputerView. Added a filter in the ComputerView (thanks to NorbertFe for this feature request). Added an option, when right-clicking on a computer, to display the current logon user of the remote computer. Added an option in settings to choose whether WPP pings remote computers using IPv4, IPv6 or IPv6 and, if that fails, IP...
    Lab Of Things: vBeta1: Initial release of LoT.
    VidCoder: 1.4.23: New in 1.4.23: added French translation; fixed non-x264 video encoders not sticking in the video tab. New in 1.4: updated HandBrake core to 0.9.9; Blu-ray subtitle (PGS) support; additional framerates: 30, 50, 59.94, 60; additional sample rates: 8, 11.025, 12 and 16 kHz; additional higher bitrates for audio; Same as Source Constant Framerate; 24-bit FLAC encoding; added Windows Phone 8 and Apple TV 3 presets; introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will ...
    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.96: Fix for issue #19957: EXE should output the name of the file(s) being minified. Discussion #449181: throw a Sev-2 warning when trailing commas are detected on an Array literal. Perfectly legal to do so, but the behavior ends up working differently on different browsers, so throw a cross-browser warning. Added a few more known global names for improved ES6 compatibility. Updated the NuGet package to version 2.5, which automatically adds the AjaxMin.targets to your project when you update the package...
    Outlook 2013 Add-In: Categories and Colors: This new version has a major change in the drawing of the list items: using owner-drawn code to format the appointments using GDI (some flickering may occur, but it looks a little bit better IMHO, with separate sections); added category color support (if more than one category, only one color will be shown). Here, the colors Outlook uses are slightly different than the ones available in System.Drawing, so I did a first-approach matching. ;-) Added appointment status support (to show fr...
    TypePipe: 1.15.2.0 (.NET 4.5): This is build 1.15.2.0 of the TypePipe for .NET 4.5. Find the complete release notes for the build here: Release Notes.
    re-linq: 1.15.2.0 (.NET 4.5): This is build 1.15.2.0 of re-linq for .NET 4.5. Find the complete release notes for the build here: Release Notes. To use re-linq with .NET 3.5, use a 1.13.x build.
    Columbus Remote Desktop: 2.0 Sapphire: Added configuration settings; added update notifications; added the ability to disable GPU acceleration; fixed connection bugs.
    DataDevelop: Beta 0.6.5: Hotfix for a bug in the Python Table.ImportAll method; updated external libraries; fixes in Excel exportation; modifying the ConnectionString now refreshes the Properties window correctly.
    User Group Labs: User Group Data: 01.00.00: This release has the following updates and new features: initial release with a minimal feature set; easy to use (just add to the social group details page); edit common user group properties. System requirements: DNN v07.00.02 or newer; .Net Framework v4.0 or newer.
    CarrotCake, an ASP.Net WebForms CMS: Binaries and PDFs - Zip Archive (v. 4.3 20130713): Product documentation and additional templates for this version are included in the zip archive; if you want individual files, visit the http://www.carrotware.com website. Templates, in addition to those found in the download, can be downloaded individually from the website as well. If you are coming from earlier versions, make a precautionary backup of your existing website files and database. When installing the update, the database update engine will create the new schema items (if you...
    Dalmatian Build Script: Dalmatian Build 0.1.3.0: Minor bug fixes; added Choose<T> and ChooseYesNo to the Console object.
    Pushover.NET: Pushover.NET - Stable Release 10 July 2013: This is the first stable release of Pushover.NET. It includes 14 overloads of the SendNotification method, giving you total flexibility in sending Pushover notifications from your client. The assembly is built against .NET 2.x so it can be called from .NET 2.x, 3.x and 4.x projects. Also available is the Test Harness. This is a small GUI that you can use to test calls to Pushover.NET's main DLL. It's almost fully functional; the sound effects haven't been fully configured, so no matter what you pick ...
    MCEBuddy 2.x: MCEBuddy 2.3.14: 2.3.14 BETA is available through the Early Access Program. Click here https://mcebuddy2x.codeplex.com/discussions/439439 for details and to get access to the Early Access Program to download the latest releases. Changelog for 2.3.14 (32bit and 64bit). NEW FEATURES / ENHANCEMENTS: improved eMail notifications; improved metrics details; support for a larger history (INI) file (about 45,000 sections, each section can have about 1500 entries). BUG FIXES: fix for extracting movie release year from...
    Azure Depot: Flask: Flask Version 01.

    New Projects

    greatagent (wwqgtxx-goagent): A goagent/wallproxy-related project (description garbled in the original).
    Azure Anonymous SAS URL Generator: The Azure Anonymous SAS URL Generator allows you to generate anonymous Shared Access Signature URLs for individual blobs stored within a given storage container.
    BizTalk ESB Toolkit Enterprise Library machine.config Toggler: This tool provides an instant on/off switch for the enterpriseLibrary.ConfigurationSource changes that the BizTalk ESB Toolkit makes to machine.config.
    Common .NET Utilities: Just some common utilities I've developed over the years that are shared among my various projects.
    DIY Online System: To create a Philippine Standard Online Payroll System.
    Dynamics Ax CipherLib: The Dynamics Ax CipherLib is a simple X.509 certificate based cipher implementation that allows you to encrypt/decrypt S/MIME or XML messages with Dynamics Ax 4.0,
    GpsBox: We wanted to create a device which sends its current position, so that any user is able to look up its current position.
    Ingenious Framework: Ingenious Framework is an easy to use, lightweight framework that hides semantic web technologies from you as a developer, but harnesses their benefits.
    Lab Of Things: Lab of Things is a flexible platform for experimental research that uses connected devices in homes.
    Orchard Deployment: Deployment and replication of content between one or more Orchard CMS instances. Move content individually, using workflows or custom subscriptions.
    PluginManagement.Net: .NET plugin loader framework using MEF (Managed Extensibility Framework). It has a generic loader which loads plugins that implement the generic interface IPlugin.
    QlikView Management API in vb.net: For more info see http://community.qlikview.com/docs/DOC-3647 - a conversion from C# to VB.NET.
    Recover Deleted Emails Microsoft Exchange Server: Get an effective way to recover deleted emails from an Exchange server and browse them with MS Outlook or MS Exchange server, depending upon the user's requirement.
    Shindo's Race System: Special Edition: Shindo's Race System: Special Edition.
    SP Solution Builder: A simple summary of the project will be added later.
    testfailure0716: test
    TFS Build Custom Activities: TFS Build Custom extensions offer you a list of TFS build activities and a workflow to use them.
    Upida.net: If you use ASP.NET MVC and NHibernate, then Upida is for you. Upida favors knockout.js. You can download example code implemented with knockout.js.

    Read the article

  • DTLoggedExec 1.0 Stable Released!

    - by Davide Mauri
    After several years of development I’ve finally released the first non-beta version of DTLoggedExec! I’m now very confident that the product is stable and solid and has all the features that are important to have (at least for me).

    DTLoggedExec 1.0
    http://dtloggedexec.codeplex.com/releases/view/44689

    Here are the release notes:

    FIRST NON-BETA RELEASE! :)
    Code cleaned up
    Added SetPackageInfo method to the ILogProvider interface to make future improvements easier
    Deprecated the arguments 'ProfileDataFlow', 'ProfilePath', 'ProfileFileName'
    Added the new argument 'ProfileDataFlowFileName' that replaces the old 'ProfileDataFlow', 'ProfilePath', 'ProfileFileName' arguments
    Updated database scripts to support new reports
    Split releases into three different packages for easier maintenance and updates: DTLoggedExec Executable, Samples & Reports
    Fixed Issue #25738 (http://dtloggedexec.codeplex.com/WorkItem/View.aspx?WorkItemId=25738)
    Fixed Issue #26479 (http://dtloggedexec.codeplex.com/WorkItem/View.aspx?WorkItemId=26479)

    To make things easier to maintain I’ve divided the original package into three different releases. One is the DTLoggedExec executable; samples and reports are now available in separate packages so that I can update them more frequently without having to touch the engine. Source code of everything is available through source code control: http://dtloggedexec.codeplex.com/SourceControl/list/changesets As usual, comments and feedback are more than welcome! (Just use CodePlex, please, so it will be easier for me to keep track of requests and issues.)

    Read the article

  • Different methods of ammo resupply

    - by Chris Mantle
    I'm writing a small game at the moment. Presently, I have one or two design elements that aren't locked down yet, and I wanted to ask for input on one of these. For dramatic effect, the player's character in my game is immobilised, alone and has a supposedly limited amount of ammo for their weapons. However, I would like to periodically resupply the player with ammo (for the purpose of balancing the level of difficulty and to allow the player to continue if they're doing well). I'm trying to think of a method of resupply that's different to the more familiar strategies of making ammo magically appear or having the antagonists drop some when they die. I'd like to emphasise the notion of the player's isolation as much as possible, and finding a way of 'sneaking' ammo to the player without removing too much of that emphasis is basically what I'm trying to think of (it's definitely a valid argument that resupplying the player removes it anyway) I have considered a sort of simple in-game 'store', where kills get you points that you can spend on ammo for your favourite weapon. This might work well, and may also be good for supporting a simple micro-transaction business model within the game. However, you'd have to pause the game often to make purchases, which would interrupt the action, and it works against the notion of isolation. Any thoughts?

    Read the article

  • Oracle Tutor: Learn Tutor in the comfort of your own home or office

    - by emily.chorba(at)oracle.com
    The primary challenge for companies faced with documenting policies and procedures is to realize that they can do this documentation in-house, with existing resources, using Oracle Tutor. Procedure documentation is a critical success component when supporting corporate governance or other regulatory compliance initiatives, and when implementing or upgrading to a new business application. There are over 1000 Oracle Tutor customers worldwide that have used Tutor to create, distribute, and maintain their business procedures. This is easily accomplished because of Tutor's:

    Ease of use by those who have to write procedures (Microsoft Word based authoring)
    Ease of company-wide implementation (complex document management activities are centralized)
    Ease of use by workers who have to follow the procedures (play script format)
    Ease of access by remote workers (web-enabled)

    Oracle University is offering live virtual Tutor classes! The class lasts four days, starting on Tuesday and finishing on Friday. This course is an introduction to the Oracle Tutor suite of products. It focuses on the policy and procedure writing feature set of the Tutor applications. Participants will learn about writing procedures and maintaining these particular process document types, all using the Tutor method. The next three classes are scheduled for:

    April 19 - 22
    May 31 - June 3
    July 5 - 8

    You will learn to:

    Write procedures
    Create procedure flowcharts
    Write support documents
    Create impact analysis reports
    Create role-based employee manuals
    Deploy online employee manuals on an intranet

    Enjoy learning Tutor in your local environment. Start the sign-up process from this link.

    Learn More

    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba
    Principal Product Manager
    Oracle Tutor & BPM

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed as:

        SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL)

    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step.

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME

        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()

        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set shows some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory and we can wrap them in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
        …
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    which gives us the database and table names. Joining to sys.indexes adds the index name:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , …
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
            AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

        SELECT *
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;
        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
        IF @db_id IS NULL
        BEGIN
            PRINT N'Invalid database';
        END
        ELSE IF @object_id IS NULL
        BEGIN
            PRINT N'Invalid object';
        END
        ELSE
        BEGIN
            SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
        END;
        GO

    In cases where the results of querying this dmv don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will simply be noticed when the results are not consistent with what was expected, and in the case of this blog this is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. The next columns are partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level; we'll skip these, as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are:

    avg_fragmentation_in_percent – the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode.
    fragment_count – the number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode.
    avg_fragment_size_in_pages – the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode.
    page_count – the total number of index or data pages in use.

    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index were bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data and the speed of access would not be too poor. As the number of fragments increases, the amount of data in each piece obviously decreases, which means the amount of work the disks must do to retrieve the data to satisfy the query increases, and performance starts to suffer.

This information is useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply (a minimal sketch of such a decision is shown a little further down). There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes.

Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. There is a clear correlation between four of these columns: page_count is the number of 8KB pages used by the index, avg_page_space_used_in_percent is how full each page is (how much of the 8KB has actual data written on it), record_count is how many records are recorded in the index, and avg_record_size_in_bytes is the average size of each record. This approximates to:

((page_count * 8 * 1024) * (avg_page_space_used_in_percent / 100)) / record_count = avg_record_size_in_bytes*

avg_page_space_used_in_percent is an important column to review as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the FILL_FACTOR parameter given when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page, and therefore in each fragment, reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: details of the smallest and largest records in the index, purely offered as a guide to help the DBA better understand the storage practices taking place.
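Returning to the 'intelligent' defragmentation process mentioned above, the sketch below shows the shape such a decision might take. The 5% and 30% boundaries are the commonly quoted Books Online guidelines and the 100-page cut-off is a hypothetical value of my own; the community solutions referenced earlier weigh far more factors than this:

SELECT OBJECT_NAME([ddips].[object_id]) AS [TableName] ,
       [i].[name] AS [IndexName] ,
       [ddips].[avg_fragmentation_in_percent] ,
       CASE -- thresholds from the Books Online guidance
           WHEN [ddips].[avg_fragmentation_in_percent] > 30 THEN 'REBUILD'
           WHEN [ddips].[avg_fragmentation_in_percent] > 5 THEN 'REORGANIZE'
           ELSE 'NONE'
       END AS [SuggestedAction]
FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                               AND [ddips].[object_id] = [i].[object_id]
WHERE [ddips].[page_count] > 100 -- hypothetical cut-off: tiny indexes rarely merit attention
  AND [i].[name] IS NOT NULL;    -- skip heaps

The output is only a suggestion list; an automated version would go on to build and execute the matching ALTER INDEX ... REBUILD or REORGANIZE statements.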
    So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently the DBA should potentially consider:

- the FILL_FACTOR of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
- the columns used in the index, which should be analysed so that new records are always added at the end of the index rather than needing to be inserted in the middle.

* – it's approximate, as there are many factors associated with things like the type of data and other database settings that affect this slightly.

Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk.

Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred. Run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • Speaking at MySQL Connect 2012

    - by jonathonc
    At the end of September, the MySQL Connect 2012 conference will be held as part of Oracle OpenWorld in San Francisco. MySQL Connect is a two-day event that allows attendees to focus on MySQL at a technical depth, with presentations and interaction with many of the MySQL developers, engineers and other knowledgeable staff. There is also a range of international speakers to bring broader perspectives to the presentations. I am presenting a Hands-On Lab on Sunday 30th September, 16:15 - 17:15, entitled HOL10474 - MySQL Security: Authentication and Auditing. The session goes through an introduction to the plugin API and how it can help expand the capabilities of MySQL. Since it is a hands-on lab, attendees will work through practical examples of implementing simple plugins to get a start in developing their own. These plugin examples are based around implementing PAM authentication and how it can be utilized to offer greater security for the MySQL Server. Once the authentication has been tested, a method to monitor it will be implemented using the auditing API, logging different events as they happen in the service. There are a total of 78 sessions at MySQL Connect 2012 with a great range of speakers. Hope to see you there!

    Read the article

  • Difference between Detach/Attach and Restore/BackUp a DB

    - by SAMIR BHOGAYTA
    Transact-SQL BACKUP/RESTORE is the normal method for database backup and recovery. Databases can be backed up while online. The backup file size is usually smaller than the database files since only used pages are backed up. Also, in the FULL or BULK_LOGGED recovery model, you can reduce potential data loss by performing transaction log backups. Detaching a database removes the database from SQL Server while leaving the physical database files intact. This allows you to rename or move the physical files and then re-attach. Although one could perform cold backups using this technique, detach/attach isn't really intended to be used as a backup/recovery process. It is commonly recommended that you use BACKUP/RESTORE for disaster recovery (DR) scenarios and for copying data from one location to another. But this is not absolute: for a very large database that you want to move from one location to another, the backup/restore process may take longer than you would like, in which case detaching/attaching the database is a better way, since you can attach a workable database very quickly. But you need to be aware that detaching a database will bring it offline for a short time and that detach/attach does not provide DR protection. For more information about detaching and attaching databases, see Detaching and Attaching Databases: http://technet.microsoft.com/en-us/library/ms190794.aspx
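    As a quick illustration of the two approaches, here is a minimal sketch; the database name, file paths and logical file names are placeholders, not values from any particular system:

-- Backup/restore: the database stays online while the backup runs
BACKUP DATABASE [MyDatabase]
TO DISK = N'C:\Backups\MyDatabase.bak';

RESTORE DATABASE [MyDatabase]
FROM DISK = N'C:\Backups\MyDatabase.bak'
WITH MOVE N'MyDatabase' TO N'D:\Data\MyDatabase.mdf',     -- logical data file name is a placeholder
     MOVE N'MyDatabase_log' TO N'E:\Logs\MyDatabase_log.ldf';

-- Detach/attach: the database goes offline and the existing files are reused as-is
EXEC sp_detach_db @dbname = N'MyDatabase';

CREATE DATABASE [MyDatabase]
ON (FILENAME = N'D:\Data\MyDatabase.mdf'),
   (FILENAME = N'E:\Logs\MyDatabase_log.ldf')
FOR ATTACH;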

    Read the article

  • How to set TextureFilter to Point to make example Bloom filter work?

    - by Mr Bell
    I have a simple app that renders some particles, and now I am trying to apply the bloom shader from the XNA samples ( http://create.msdn.com/en-US/education/catalog/sample/bloom ) to it, but I am running into this exception: "XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4." It is thrown when the BloomComponent tries to end the sprite batch in the DrawFullscreenQuad method:

spriteBatch.Begin(0, BlendState.Opaque, SamplerState.PointWrap, null, null, effect);
spriteBatch.Draw(texture, new Rectangle(0, 0, width, height), Color.White);
spriteBatch.End(); // <------- Exception thrown here

It seems to be related to the pixel shaders that I am using to animate the particles. In a nutshell, I have a Texture2D in Vector4 format that holds particle positions, and another one for velocities. Here is a snippet from that area:

GraphicsDevice.SetRenderTarget(tempRenderTarget);
animationEffect.CurrentTechnique = animationEffect.Techniques[technique];
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, SamplerState.PointWrap, DepthStencilState.DepthRead, RasterizerState.CullNone, animationEffect);
spriteBatch.Draw(randomValues, new Rectangle(0, 0, width, height), Color.White);
spriteBatch.End();

When I comment out the code that calls the particle animation pixel shaders, the bloom component runs fine. Is there some state that I need to reset to make the bloom work?

    Read the article

  • jQuery + Perl CGI to vb.net transition

    - by user1257458
    I've been developing Oracle database-heavy "web applications" forever by building my HTML by hand, adding some jQuery to handle AJAX requests (HTML inserts for forms processing etc.), and I've always done my server-side work in Perl CGI. I really love how easy it is to read some form input, execute some select statements through DBI (SO EASY), and generate HTML to be inserted by the jQuery request. That's a web application to me. However, my new boss builds everything in Visual Studio 2010, VB.net, usually WebForms. So, for work reasons, I now need to start developing in VB.net so it can be collectively maintained, and I'm just seeking advice on where to start learning and how to approach this. I know I could at least learn ASP.net and VB.net, and create a webform, have it read parameters, return HTML, etc., which would allow me to reuse my previously written HTML and client-side scripts (jQuery). Also, since we're moving heavily to mobile applications, I really need to reduce the client-side processing load. Is there any advantage to my boss' method? Thanks a ton.

    Read the article

  • Camera wont stay behind model after pitch, then rotation

    - by ChocoMan
    I have a camera positioned behind a model. Currently, if I push the left thumbstick, making my model move forward, backward, or strafe, the camera stays with the model. If I push the right thumbstick left or right, the model rotates in those directions fine, with the camera rotating along and maintaining its position relatively behind the model. But when I pitch the model up or down, then rotate the model afterwards, the camera slightly rotates in a clock-like fashion behind the model. If I do a few rotations of the model and try to pitch the camera, the camera will eventually be looking at the side, then eventually the front of the model, while also rotating in a clock-like fashion. My question is: how do I keep the camera pitching up and down behind the model no matter how much the model has rotated? Here is what I have:

// Rotates model and pitches camera on its own axis
public void modelRotMovement(GamePadState pController)
{
    // Rotates camera with model
    Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(angularSpeed);

    // Pitches camera around model
    Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(angularSpeed);

    AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0);
    ModelLoad.MRotation *= AddRotation;
    MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);
}

// Orbit (yaw) camera around with model (only seeing back of model)
public void cameraYaw(Vector3 axisYaw, float yaw)
{
    ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
        Matrix.CreateFromAxisAngle(axisYaw, yaw)) + ModelLoad.camTarget;
}

// Raise camera above or below model's shoulders
public void cameraPitch(Vector3 axisPitch, float pitch)
{
    ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
        Matrix.CreateFromAxisAngle(axisPitch, pitch)) + ModelLoad.camTarget;
}

// Call in update method
public void updateCamera()
{
    cameraYaw(Vector3.Up, Yaw);
    cameraPitch(Vector3.Right, Pitch);
}

NOTE: I tried to use addPitch just like AddRotation but it didn't work...

    Read the article

  • Where'd My Data Go? (and/or...How Do I Get Rid of It?)

    - by David Paquette
    Want to get a better idea of how cascade deletes work in Entity Framework Code First scenarios? Want to see it in action? Stick with us as we quickly demystify what happens when you tell your data context to nuke a parent entity. This post is authored by Calgary .NET User Group Leader David Paquette with help from Microsoft MVP in ASP.NET James Chambers. We got to spend a great week back in March at Prairie Dev Con West, chock full of sessions, presentations, workshops, conversations and, of course, questions. One of the questions that came up during my session: "How does Entity Framework Code First deal with cascading deletes?". James and I had different thoughts on what the default was, if it was different from SQL Server, if it was the same as EF proper and if there was a way to override whatever the default was. So we built a set of examples and figured out that the answer is simple: it depends. (Download Samples)

Consider the example of a hockey league. You have several different entities in the league, including games, teams that play the games and players that make up the teams. Each team also has a mascot. If you delete a team, we need a couple of things to happen:

1. The team, games and mascot will be deleted, and
2. The players for that team will remain in the league (and therefore the database) but should no longer be assigned to a team.

So, let's make this start to come together with a look at the default behaviour in SQL when using an EDMX-driven project.

The Reference – Understanding EF's Behaviour with an EDMX/DB First Approach

First up, let's take a look at the DB-first approach. In the database, we defined 4 tables: Teams, Players, Mascots, and Games. We also defined 4 foreign keys as follows:

- Players.Team_Id (NULL) -> Teams.Id
- Mascots.Id (NOT NULL) -> Teams.Id (ON DELETE CASCADE)
- Games.HomeTeam_Id (NOT NULL) -> Teams.Id
- Games.AwayTeam_Id (NOT NULL) -> Teams.Id

Note that by specifying ON DELETE CASCADE for the Mascots -> Teams foreign key, the database will automatically delete the team's mascot when the team is deleted. While we want the same behaviour for the Games -> Teams foreign keys, it is not possible to accomplish this using ON DELETE CASCADE in SQL Server. Specifying ON DELETE CASCADE on these foreign keys would cause a circular reference error:

The series of cascading referential actions triggered by a single DELETE or UPDATE must form a tree that contains no circular references. No table can appear more than one time in the list of all cascading referential actions that result from the DELETE or UPDATE – MSDN

When we create an entity data model from the above database, these associations come through as defined. In order to get the Games to be deleted when the Team is deleted, we need to specify an End1 OnDelete action of Cascade for the HomeGames and AwayGames associations. Now we have an Entity Data Model that accomplishes what we set out to do. One caveat here is that Entity Framework will only properly handle the cascading delete when the players and games for the team have been loaded into memory. For a more detailed look at Cascade Delete in EF Database First, take a look at this blog post by Alex James.
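For reference, a minimal sketch of the DDL described above might look like the following. The foreign keys match the list in the post, while the key types and bare-bones columns are illustrative assumptions rather than the actual schema from the sample download:

-- Parent table
CREATE TABLE Teams (
    Id INT PRIMARY KEY
);

-- Nullable FK: players survive a team delete, with Team_Id set to NULL
CREATE TABLE Players (
    Id INT PRIMARY KEY,
    Team_Id INT NULL REFERENCES Teams(Id)
);

-- Shared primary key with cascade: the mascot is deleted with the team
CREATE TABLE Mascots (
    Id INT NOT NULL PRIMARY KEY,
    FOREIGN KEY (Id) REFERENCES Teams(Id) ON DELETE CASCADE
);

-- Two required FKs to the same table: SQL Server rejects ON DELETE CASCADE
-- here because of the multiple-cascade-paths restriction quoted above
CREATE TABLE Games (
    Id INT PRIMARY KEY,
    HomeTeam_Id INT NOT NULL REFERENCES Teams(Id),
    AwayTeam_Id INT NOT NULL REFERENCES Teams(Id)
);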
Building The Same Sample with EF Code First

Next, we're going to build up the model with the code-first approach. EF Code First is defined on the ADO.NET team blog as such:

Code First allows you to define your model using C# or VB.Net classes, optionally additional configuration can be performed using attributes on your classes and properties or by using a Fluent API. Your model can be used to generate a database schema or to map to an existing database.

Entity Framework Code First follows some conventions to determine when to cascade delete on a relationship. More details can be found on MSDN:

- If a foreign key on the dependent entity is not nullable, then Code First sets cascade delete on the relationship.
- If a foreign key on the dependent entity is nullable, Code First does not set cascade delete on the relationship, and when the principal is deleted the foreign key will be set to null.
- The multiplicity and cascade delete behavior detected by convention can be overridden by using the fluent API. For more information, see Configuring Relationships with Fluent API (Code First).

Our DbContext consists of 4 DbSets:

public DbSet<Team> Teams { get; set; }
public DbSet<Player> Players { get; set; }
public DbSet<Mascot> Mascots { get; set; }
public DbSet<Game> Games { get; set; }

When we set the Mascot -> Team relationship to required, Entity Framework will automatically delete the Mascot when the Team is deleted. This can be done either using the [Required] data annotation attribute, or by overriding the OnModelCreating method of your DbContext and using the fluent API.

Data Annotations:

public class Mascot
{
    public int Id { get; set; }
    public string Name { get; set; }

    [Required]
    public virtual Team Team { get; set; }
}

Fluent API:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);
}

The Player -> Team relationship is automatically handled by the Code First conventions. When a Team is deleted, the Team property for all the players on that team will be set to null. No additional configuration is required; however, all the Player entities must be loaded into memory for the cascading to work properly.

The Game -> Team relationship causes some grief in our Code First example. If we try setting the HomeTeam and AwayTeam relationships to required, Entity Framework will attempt to set ON DELETE CASCADE for the HomeTeam and AwayTeam foreign keys when creating the database tables. As we saw in the database-first example, this causes a circular reference error and throws the following SqlException:

Introducing FOREIGN KEY constraint 'FK_Games_Teams_AwayTeam_Id' on table 'Games' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint.

To solve this problem, we need to disable the default cascade delete behaviour using the fluent API:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);

    modelBuilder.Entity<Team>()
        .HasMany(t => t.HomeGames)
        .WithRequired(g => g.HomeTeam)
        .WillCascadeOnDelete(false);

    modelBuilder.Entity<Team>()
        .HasMany(t => t.AwayGames)
        .WithRequired(g => g.AwayTeam)
        .WillCascadeOnDelete(false);

    base.OnModelCreating(modelBuilder);
}

Unfortunately, this means we need to manually manage the cascade delete behaviour. When a Team is deleted, we need to manually delete all the home and away Games for that Team.
// 'jets' is a Team loaded earlier in the sample; 'homeGames' holds its home games
foreach (Game awayGame in jets.AwayGames.ToArray())
{
    entities.Games.Remove(awayGame);
}
foreach (Game homeGame in homeGames)
{
    entities.Games.Remove(homeGame);
}
entities.Teams.Remove(jets);
entities.SaveChanges();

Overriding the Defaults – When and How To

As you have seen, the default behaviour of Entity Framework Code First can be overridden using the fluent API. This can be done by overriding the OnModelCreating method of your DbContext, or by creating separate model override files for each entity. More information is available on MSDN.

Going Further

These were simple examples but they helped us illustrate a couple of points. First of all, we were able to demonstrate the default behaviour of Entity Framework when dealing with cascading deletes, specifically how entity relationships affect the outcome. Secondly, we showed you how to modify the code and control the behaviour to get the outcome you're looking for. Finally, we showed you how easy it is to explore this kind of thing, and we're hoping that you get a chance to experiment even further. For example, did you know that:

- Entity Framework Code First also works seamlessly with SQL Azure (MSDN)
- Database creation defaults can be overridden using a variety of IDatabaseInitializers (Understanding Database Initializers)
- You can use code-based migrations to manage database upgrades as your model continues to evolve (MSDN)

Next Steps

There's no time like the present to start the learning, so here's what you need to do:

1. Get up-to-date in Visual Studio 2010 (VS2010 | SP1) or Visual Studio 2012 (VS2012)
2. Build yourself a project to try these concepts out (or download the sample project)
3. Get into the community and ask questions! There are a ton of great resources out there and community members willing to help you out (like these two guys!).

Good luck!

About the Authors

David Paquette works as a lead developer at P2 Energy Solutions in Calgary, Alberta, where he builds commercial software products for the energy industry. Outside of work, David enjoys camping, fishing, and skiing. David is also active in the software community, giving presentations both locally and at conferences, and serves as the President of the Calgary .NET User Group.

James Chambers crafts software awesomeness with an incredible team at LogiSense Corp, based in Cambridge, Ontario. A husband, father and humanitarian, he is currently residing in the province of Manitoba, where he resists the urge to cheer for the Jets and maintains his allegiance to the Calgary Flames. When he's not active with the family, outdoors or volunteering, you can find James speaking at conferences and user groups across the country about web development and related technologies.

    Read the article

  • New PeopleSoft Applications Search

    - by Matthew Haavisto
    As you may have seen from the PeopleTools 8.52 Release Value Proposition, PeopleTools intends to introduce a new search capability in release 8.52. We believe this feature will not only improve the ability of users to find content, but will fundamentally change the way people navigate around the PeopleSoft ecosystem. PeopleSoft applications will be delivering this new search in coming releases and feature packs. PeopleSoft Application Search is actually a framework – a group of features that provides an improved means of searching for a variety of content across PeopleSoft applications. From a user experience perspective, the new search offers a powerful, keyword-based search presented in a familiar, intuitive user experience. Rather than browsing through long menu hierarchies to find a page, data item, or transaction, users can use PeopleSoft Application Search to navigate directly to desired locations. We envision this to be similar to how people navigate across the internet. This capability may reduce or even eliminate the need to navigate PeopleSoft applications using the existing application menu system (though menus will still be available to people who prefer that method). The new search will be available at any point in an application and can be configured to span multiple PeopleSoft applications. It enables users to initiate transactions or navigate to key information without using the PeopleSoft application menus. In addition, filters and facets will enable people to narrow their search result sets, making it easier to identify and navigate to desired application content. Action menus are embedded directly in the search results, allowing users to navigate straight to specific related transactions, pre-populated with the selected search results data. The PeopleSoft Application Search framework uses Oracle's Secure Enterprise Search as its search engine. Most customers will benefit from the new search when it is delivered with applications; however, customers can start deploying it after a Tools-only upgrade. In this case, they would have to create their own indices and implement security.

    Read the article

< Previous Page | 704 705 706 707 708 709 710 711 712 713 714 715  | Next Page >