Search Results

Search found 20556 results on 823 pages for 'image viewer'.

  • Developing Ext JS Charts in NetBeans IDE

    - by Geertjan
    I took my first tentative steps into the world of Ext JS charts today, in NetBeans IDE 7.4. I will make a screencast soon showing how charts such as the one above can be created with NetBeans IDE and Ext JS. Setting up Ext JS is easy in NetBeans IDE because there's a JavaScript library browser with which I can browse for the Ext JS libraries that I need; NetBeans IDE then sets up the project for me. The JavaScript code shown above comes directly from here: http://www.quizzpot.com/courses/learning-ext-js-3/articles/chart-series

    The index.html is as follows:

        <html>
            <head>
                <title>TODO supply a title</title>
                <meta charset="UTF-8">
                <meta name="viewport" content="width=device-width">
                <link rel="stylesheet" href="js/libs/extjs/resources/css/ext-all.css"/>
                <script src="js/libs/ext-core/ext-core.js"></script>
                <script src="js/libs/extjs/adapter/ext/ext-base-debug.js"></script>
                <script src="js/libs/extjs/ext-all-debug.js"></script>
                <script src="app.js"></script>
            </head>
            <body>
            </body>
        </html>

    More info on Ext JS: http://docs.sencha.com/extjs/4.1.3/

    By the way, quite a few other articles are out there on Ext JS and NetBeans IDE, such as these, which I will be learning from during the coming days:

    - http://netbeans.dzone.com/extjs-rest-netbeans
    - http://netbeans.dzone.com/articles/create-your-first-extjs-4
    - http://netbeans.dzone.com/articles/mixing-extjs-json-p-and-java

  • Writing an ASP.Net Web based TFS Client

    - by Glav
    So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc., to be able to attribute whether the item was related to Research and Development, and if so, what kind. We are using TFS Azure and don't have the option of custom templates. I have decided on using a string based field within the template that is not very visible, and which we don't otherwise use, to store a small set of custom codes which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure. (Note: we are also using Visual Studio 2012.)

    Now TFS Azure uses your Live ID, and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won't work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find, and there are some answers out there which don't appear to be answers at all, given they didn't work in my scenario. So for anyone else who wants to do this, here is a simple breakdown of what you have to do:

    1. Get the "TFS Service Credential Viewer". Install it, run it, connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don't bother trying to do anything else.

    2. In your MVC app, reference the following assemblies from "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0":

        Microsoft.TeamFoundation.Client.dll
        Microsoft.TeamFoundation.Common.dll
        Microsoft.TeamFoundation.VersionControl.Client.dll
        Microsoft.TeamFoundation.VersionControl.Common.dll
        Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll
        Microsoft.TeamFoundation.WorkItemTracking.Client.dll
        Microsoft.TeamFoundation.WorkItemTracking.Common.dll

    3. If hosting this in Internet Information Server, enable 32 bit support for the application pool this app runs under.

    4. Allow the TFS client assemblies to store a cache of files on your system. If you don't do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the <appSettings> element as shown below:

        <appSettings>
            <!-- Add reference to TFS Client Cache -->
            <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" />
        </appSettings>

    With all that in place, you can write the following code:

        var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential(
            "{your-service-account-name}", "{your-service-acct-password}");
        var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token);
        var currentCollection = new TfsTeamProjectCollection(
            new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds);
        currentCollection.EnsureAuthenticated();

    In the above code, note that the URL contains "defaultcollection" at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance. In addition, make sure the service account username and password that were generated in the first step are substituted in here.

    Note: If something is not right, the EnsureAuthenticated() call will throw an exception with the message being that you are not authorised. If you forget the "defaultcollection" on the URL, it will still fail, but with a message saying you are not authorised; that is, a similar but different exception message.

    And that is it. You can then query the collection using something like:

        var service = currentCollection.GetService<WorkItemStore>();
        var proj = service.Projects[0];
        var allQueries = proj.StoredQueries;
        for (int qcnt = 0; qcnt < allQueries.Count; qcnt++)
        {
            var query = allQueries[qcnt];
            var queryDesc = string.Format("Query found named: {0}", query.Name);
        }

    You get the idea. If you search around, you will find references to the ServiceIdentityCredentialProvider, which is referenced in this article. I had no luck with that method, and it all looked too hard since it required an extra KB article and other magic sauce. So I hope this helps. This article certainly would have helped me save a boat load of time and frustration.
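    To go one step further and actually pull work items back, the same WorkItemStore service can execute WIQL directly. The following is a minimal sketch of that idea (mine, not from the original post); the WIQL text and the 'MyProject' name are placeholder assumptions you would replace with your own:

        // Hypothetical example: run a WIQL query against the authenticated collection.
        // "MyProject" and the WIQL text below are placeholders, not from the original post.
        var store = currentCollection.GetService<WorkItemStore>();
        string wiql = "SELECT [System.Id], [System.Title] FROM WorkItems " +
                      "WHERE [System.TeamProject] = 'MyProject' " +
                      "AND [System.WorkItemType] = 'Bug'";
        WorkItemCollection results = store.Query(wiql);
        foreach (WorkItem item in results)
        {
            // Each result is a full work item; Id and Title are standard fields.
            System.Diagnostics.Debug.WriteLine("{0}: {1}", item.Id, item.Title);
        }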

  • Unable to fix broken packages with sudo apt-get install -f

    - by Bob
    Here's my result of sudo apt-get install -f. I have run it twice and got a negative result both times. I believe there is an error at "error in Version string '0:3.6.1-dates for language English Translation data updates for all supported packages for: English". This same "error in Version string" caused me three days of attempting to download version 12.04. There is a bug report concerning the quoted text as well. Is there any way to download the version without the language packs, and why would I corrupt version 11.10? Also, when attempting to download Synaptic using sudo apt-get install synaptic, I get the same error message. Again I point out the initial download problems and the same error message receipt. Thanks

    b0b@b0b-IC780M-A:~$ sudo apt-get install -f [sudo] password for b0b: Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 298 not upgraded.
    b0b@b0b-IC780M-A:~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 298 not upgraded.
    b0b@b0b-IC780M-A:~$ sudo apt-get upgrade install Reading package lists... Done Building dependency tree Reading state information... Done The following packages have been kept back: linux-headers-generic software-center The following packages will be upgraded: accountsservice acpi-support acpid aisleriot alsa-utils app-install-data-partner appmenu-qt apport apport-gtk apt-transport-https apt-utils aptdaemon aptdaemon-data apturl apturl-common banshee banshee-extension-soundmenu banshee-extension-ubuntuonemusicstore baobab bind9-host binutils bluez-alsa bluez-cups bluez-gstreamer brasero brasero-cdrkit brasero-common checkbox checkbox-gtk command-not-found command-not-found-data compiz compiz-core compiz-gnome compiz-plugins-default compiz-plugins-main-default cups cups-bsd cups-client cups-common cups-ppdc deja-dup desktop-file-utils dnsutils empathy empathy-common eog evince evince-common evolution-data-server evolution-data-server-common file-roller firefox firefox-globalmenu firefox-gnome-support gbrainy gcalctool gconf2 gconf2-common gedit gedit-common ghostscript ghostscript-cups ghostscript-x gir1.2-atspi-2.0 gir1.2-gconf-2.0 gir1.2-gnomebluetooth-1.0 gir1.2-gtk-3.0 gir1.2-gtksource-3.0 gir1.2-totem-1.0 gir1.2-unity-4.0 gir1.2-webkit-3.0 gnome-accessibility-themes gnome-bluetooth gnome-control-center gnome-control-center-data gnome-desktop3-data gnome-font-viewer gnome-games-common gnome-icon-theme gnome-mahjongg gnome-online-accounts gnome-orca gnome-power-manager gnome-screenshot gnome-search-tool gnome-session gnome-session-bin gnome-session-canberra gnome-session-common gnome-settings-daemon gnome-sudoku gnome-system-log gnome-system-monitor gnome-utils-common gnomine gstreamer0.10-gconf gstreamer0.10-plugins-good gstreamer0.10-pulseaudio gvfs gvfs-backends gvfs-bin gvfs-fuse gwibber gwibber-service gwibber-service-facebook gwibber-service-identica gwibber-service-twitter hpijs hplip hplip-cups hplip-data indicator-datetime indicator-session indicator-sound isc-dhcp-client isc-dhcp-common jockey-common jockey-gtk language-selector-common language-selector-gnome libaccountsservice0 libapt-inst1.3 libarchive1 libasound2-plugins libatk-adaptor libbind9-60 libbrasero-media3-1 libcamel-1.2-29 libcanberra-gtk-module libcanberra-gtk0 libcanberra-gtk3-0 libcanberra-gtk3-module libcanberra-pulse libcanberra0 libdecoration0 libdns69 libebackend-1.2-1 libebook1.2-12
libecal1.2-10 libedata-book-1.2-11 libedata-cal-1.2-13 libedataserver1.2-15 libedataserverui-3.0-1 libevince3-3 libgconf2-4 libgnome-bluetooth8 libgnome-control-center1 libgnome-desktop-3-2 libgoa-1.0-0 libgrip0 libgs9 libgs9-common libgtk-3-bin libgtksourceview-3.0-0 libgtksourceview-3.0-common libgweather-3-0 libgweather-common libgwibber-gtk2 libgwibber2 libhpmud0 libimobiledevice2 libisc62 libisccc60 libisccfg62 libjasper1 liblightdm-gobject-1-0 liblwres60 libmetacity-private0 libmission-control-plugins0 libmono-zeroconf1.0-cil libnautilus-extension1 libnm-glib-vpn1 libnm-glib4 libnm-util2 libnotify0.4-cil libnux-1.0-0 libnux-1.0-common libpam-gnome-keyring libreoffice-emailmerge libreoffice-style-human libsane-hpaio libsmbclient libsnmp-base libsnmp15 libsyncdaemon-1.0-1 libt1-5 libtotem0 libubuntuone-1.0-1 libubuntuone1.0-cil libunity-2d-private0 libunity-core-4.0-4 libunity6 libusbmuxd1 libwbclient0 libwebkitgtk-1.0-0 libwebkitgtk-1.0-common libwebkitgtk-3.0-0 libwebkitgtk-3.0-common libxml2 linux-generic linux-image-generic metacity metacity-common mobile-broadband-provider-info modemmanager mousetweaks multiarch-support nautilus nautilus-data nautilus-sendto-empathy network-manager nux-tools onboard openssl pulseaudio pulseaudio-esound-compat pulseaudio-module-bluetooth pulseaudio-module-gconf pulseaudio-module-x11 pulseaudio-utils python-apport python-aptdaemon python-aptdaemon-gtk python-aptdaemon.gtk3widgets python-aptdaemon.gtkwidgets python-brlapi python-cups python-cupshelpers python-gobject-cairo python-httplib2 python-launchpadlib python-libxml2 python-pam python-papyon python-pkg-resources python-problem-report python-pyatspi2 python-software-properties python-ubuntuone-client python-ubuntuone-storageprotocol samba-common samba-common-bin seahorse shotwell simple-scan smbclient sni-qt software-properties-common software-properties-gtk sudo system-config-printer-common system-config-printer-gnome system-config-printer-udev telepathy-indicator telepathy-mission-control-5 thunderbird thunderbird-globalmenu thunderbird-gnome-support tomboy totem totem-common totem-mozilla totem-plugins ttf-opensymbol ubuntu-desktop ubuntu-minimal ubuntu-standard ubuntuone-client ubuntuone-client-gnome ubuntuone-couch unity unity-2d unity-2d-launcher unity-2d-panel unity-2d-places unity-2d-spread unity-common unity-lens-applications unity-services update-manager update-manager-core update-notifier update-notifier-common usbmuxd vim-common vim-tiny vinagre vino xorg xserver-xorg xserver-xorg-input-all xserver-xorg-video-all xserver-xorg-video-intel xserver-xorg-video-openchrome xul-ext-ubufox 296 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. Need to get 0 B/159 MB of archives. After this operation, 10.1 MB of additional disk space will be used. Do you want to continue [Y/n]? y Extracting templates from packages: 100% Preconfiguring packages ... dpkg: error: parsing file '/var/lib/dpkg/available' near line 4131 package 'python-zope.interface': error in Version string '0:3.6.1-dates for language English Translation data updates for all supported packages for: English . language-pack-en-base provides the bulk of translation data and is updated only seldom. This package provides frequent translation updates.': version string has embedded spaces E: Sub-process /usr/bin/dpkg returned an error code (2) b0b@b0b-IC780M-A:~$

  • Malaysian Airlines bans kids under 12, creates a separate cabin zone on its A380s

    - by Gopinath
    Kids are a lot of fun to watch and play with, as long as they don't start crying. Once they start crying, it's a tough job for parents to calm them down, and for the people around it's painful to be part of it. If it happens on a flight, it's one of the biggest annoyances you can experience; especially on long overnight journeys, it's a nightmare for passengers if a couple of kids are uncontrollable. After receiving many complaints from passengers disturbed by kids in flight, Malaysian Airlines decided to ban kids under 12 from the regular Economy class cabins of its new Airbus A380s. Parents with kids under 12 are allowed only into a special kids' zone created on the lower deck of the multi-storied jumbo Airbus A380s. Parents of under-12 kids may not appreciate this move, but the rest of the travellers should be happy. Back in June 2011, Malaysian Airlines banned infants from first class on its Boeing 747-400 jets. The CEO of Malaysian Airlines defended that decision on Twitter, arguing that first class passengers pay a premium for a comfortable journey. So if you are a parent of kids under 12, think twice before you book tickets on Malaysian Airlines. Creative commons image courtesy: flickr/transworld

  • Deferred rendering with both Clockwise and CounterClockwise culling

    - by user1423893
    I have a deferred rendering system that works well with solid objects drawn using CounterClockwise culling. I have a problem with Clockwise culled objects, which are supposed to represent hollow objects that display only their inside faces. The image below shows a CounterClockwise culled object (left) and a Clockwise culled object (right). The Clockwise culled object's faces display what would be displayed on the CounterClockwise faces. How can I get the lighting to light the inner faces of Clockwise culled objects while continuing to light the outer CounterClockwise faces as normal? My lighting method is below:

        private void DeferredLighting(GameTime gameTime)
        {
            // Set the render target for the lights
            game.GraphicsDevice.SetRenderTarget(lightMap);
            // Clear the render target to (0, 0, 0, 0)
            game.GraphicsDevice.Clear(Color.Transparent);
            // Set the render states
            game.GraphicsDevice.BlendState = BlendState.Additive;
            game.GraphicsDevice.DepthStencilState = DepthStencilState.None;
            game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
            // Set sampler state to Point as the Surface type requires it in XNA 4.0
            game.GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;
            // Set the camera properties for all lights
            BaseLight.SetCameraProperties(game.ActiveCamera);
            // Draw the lights
            int numLights = lights.Count;
            for (int i = 0; i < numLights; ++i)
            {
                if (lights[i].Diffuse.W > 0f)
                {
                    lights[i].Render(gameTime, ref normalMap, ref depthMap, ref sgrMap);
                }
            }
            // Resolve the render target
            game.GraphicsDevice.SetRenderTarget(null);
        }

    I have tried adjusting the render states, but no combination works for both objects.
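    One approach worth trying (a sketch of an idea, not a known fix for this exact setup): fill the G-buffer in two passes, flipping both the cull mode and the sign of the normal written for the hollow objects, so the stored normals face the camera and the existing lighting pass works unchanged. Here, SceneObject, gBufferEffect and the "NormalSign" shader parameter are hypothetical names; the G-buffer pixel shader would multiply the mesh normal by NormalSign before packing it into the normal map:

        // Pass 1: solid objects, outward-facing normals as usual.
        game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
        gBufferEffect.Parameters["NormalSign"].SetValue(1f);
        foreach (SceneObject solid in solidObjects)
        {
            solid.Draw(gameTime);
        }

        // Pass 2: hollow objects, keep the inside faces and negate their normals.
        game.GraphicsDevice.RasterizerState = RasterizerState.CullClockwise;
        gBufferEffect.Parameters["NormalSign"].SetValue(-1f);
        foreach (SceneObject hollow in hollowObjects)
        {
            hollow.Draw(gameTime);
        }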

  • WDS - Access to network share via local user

    - by Kenny Bones
    When installing Windows 7 using WDS, a local user account is used during the setup after the main image of Win7 is installed. I've got an application that lives on a network share, not on the deployment share itself, that I want to install. But naturally the local user account gets no access to that share. Is it possible to do this somehow? Or do I have to move the share to the deployment share? Or possibly map the share to a separate drive or something?

  • Ask the Readers: Do You Use a Desktop Email Client?

    - by Jason Fitzpatrick
    Thanks to the rise of numerous free webmail providers, there's an entire generation of email users who have never used a desktop email client. Nonetheless, there are still many dedicated desktop client users (and reasons to be one): are you among them? Whether you're webmail all the way, stick with your desktop email client, or use a hybrid system, we want to hear from you. How are you reading and responding to your email? On the web? After downloading it to your dedicated client? What are the advantages and disadvantages of the way you do things, and how would you sell your email workflow to your fellow readers? Sound off in the comments and then check back in on Friday for the What You Said roundup to see how your fellow readers manage their email workflow.

  • Workaround: building FBX in XNA raises OutOfMemoryException

    - by Vitus
    If you try to add a large FBX 3D model to an XNA project and build it, you can get an OutOfMemoryException build error like the following:

        Error 1  Building content threw OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
           at System.Collections.Generic.List`1.set_Capacity(Int32 value)
           at System.Collections.Generic.List`1.EnsureCapacity(Int32 min)
           at System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection)
           at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexChannel`1.InsertRange(Int32 index, Int32 count)
           at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexContent.InsertRange(Int32 index, IEnumerable`1 positionIndexCollection)
           at Microsoft.Xna.Framework.Content.Pipeline.Graphics.MeshBuilder.AddTriangleVertex(Int32 indexIntoVertexCollection)
           at Microsoft.Xna.Framework.Content.Pipeline.MeshConverter.FillNodeWithInfoFromMesh(KFbxNode* fbxNode, String name, KFbxGeometryConverter* geometryConverter)
           at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessInformationInNode(KFbxNode* fbxNode, String name, Boolean* partOfMainSkeleton, Boolean* warnIfBoneButNotChild)
           at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
           at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
           at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.Import(String filename, ContentImporterContext context)
           at Microsoft.Xna.Framework.Content.Pipeline.ContentImporter`1.Microsoft.Xna.Framework.Content.Pipeline.IContentImporter.Import(String filename, ContentImporterContext context)
           // additional calls here ...

    My desktop PC has 8 GB of RAM, and Visual Studio's process devenv.exe uses under 2 GB of it during the build process (about 3.5-4 GB of RAM is always free). It's obvious that VS can't address more than 2 GB of RAM, and when that limit is hit, the build process fails. My OS is Win x64, so I "charge" devenv.exe by using the editbin.exe utility. In the VS command prompt I run the following:

        editbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" /LARGEADDRESSAWARE

    This command edits the image to indicate that the application can handle addresses larger than 2 gigabytes. After that, the FBX file builds successfully! Of course, you must put the proper path to devenv.exe, depending on your installation path. If you are on Win x86, you need to take an additional action - more info here.

    P.S.: Although you can now build bigger files than usual, keep in mind that XNA has some restrictions on vertex buffer size etc., depending on your current XNA project profile (Reach or HiDef). If your model's vertex buffer is more than 64 MB (with the Reach profile), the model can't be built and raises an error.
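    As a rough way to anticipate the Reach profile limit mentioned above, you can estimate a model's vertex buffer size before handing it to the content pipeline. This is a sketch of the arithmetic only; the vertex count is a made-up number and the stride assumes a VertexPositionNormalTexture layout:

        // Estimate: vertex buffer bytes = vertex count * stride of the vertex type.
        int vertexCount = 3000000; // hypothetical model size
        int strideInBytes = VertexPositionNormalTexture.VertexDeclaration.VertexStride;
        long bufferBytes = (long)vertexCount * strideInBytes;
        // The Reach profile caps a single vertex buffer at 64 MB.
        bool fitsReachProfile = bufferBytes <= 64L * 1024 * 1024;
        Console.WriteLine("Vertex buffer ~{0:N0} bytes; Reach-safe: {1}",
            bufferBytes, fitsReachProfile);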

  • CPU Wars Is a Trump-Style Card Game Driven by Chip Stats

    - by Jason Fitzpatrick
    If you’re looking for the geekiest card game around, you’d be hard pressed to beat CPU Wars–a top-trumps card game built around CPU specs. From the game’s designers: CPU Wars is a trump card game built by geeks for geeks. For Volume 1.0 we chose 30 CPUs that we believe had the greatest impact on the desktop history. The game is ideally played by 2 or 3 people. The deck is split between the players and then each player takes a turn and picks a category that they think has the best value. We have chosen the most important specs that could be numerically represented, such as maximum speed achieved and maximum number of transistors. It’s lots of fun, it has a bit of strategy and can be played during a break or over a coffee. If you’re interested, you can pick up a copy for £7.99 (roughly $12.50 USD). Hit up the link below for more information. How To Customize Your Wallpaper with Google Image Searches, RSS Feeds, and More 47 Keyboard Shortcuts That Work in All Web Browsers How To Hide Passwords in an Encrypted Drive Even the FBI Can’t Get Into

  • CodePlex Daily Summary for Monday, June 24, 2013

    Popular Releases

    - VG-Ripper & PG-Ripper: VG-Ripper 2.9.43: changes: NEW: Added Support for "ImageJumbo.com" links; NEW: Added Support for "ImgTiger.com" links; NEW: Added Support for "ImageDax.net" links; NEW: Added Support for "HosterBin.com" links; FIXED: "ImgServe.net" links
    - WPF Composites: Version 4.3.0: In this Beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras .dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more . . . Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...
    - Windows.Forms.Controls Revisited (SSTA.WinForms): SSTA.WinForms 1.0.0: Latest stable release
    - Ascend 3D: Ascend 2.0: Release notes: Implemented bone/armature animation; Refactored class hierarchy and naming; Addressed high CPU usage issue during animation; Updated the Blender exporter and Ascend model format (now XML); Created AscendViewer, a tool for viewing Ascend models
    - Indent Guides for Visual Studio: Indent Guides v13: Important: This release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. Version History: Changed in v13: Added page width guide lines; Added guide highlighting options; Fixed guides appearing over collapsed blocks; Fixed guides not appearing in newly opened files; Fixed some potential crashes; Fixed lines going through pragma statements; Various updates for VS 2012 and VS 2013; Removed VS 2010 support. Changed in v12.1: Fixed crash when unable to start...
    - Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.1.0 - Prerelease d (supports .NET 3.5, 4.0 and 4.5). Includes: Fluent.dll (with .pdb and .xml); Showcase Application; Samples (not for .NET 3.5); Foundation (Tabs, Groups, Contextual Tabs, Quick Access Toolbar, Backstage); Resizing (ribbon reducing & enlarging principles); Galleries (Gallery in ContextMenu, InRibbonGallery); MVVM (shows how to use this library with the Model-View-ViewModel pattern); KeyTips; ScreenTips; Toolbars; ColorGallery. *Walkthrough (do...
    - Magick.NET: Magick.NET 6.8.5.1001: Magick.NET compiled against ImageMagick 6.8.5.10. Breaking changes: - MagickNET.Initialize has been made obsolete because the ImageMagick files in the directory are no longer necessary. - MagickGeometry is no longer IDisposable. - Renamed dll's so they include the platform name. - Image profiles can now only be accessed and modified with ImageProfile classes. - Renamed DrawableBase to Drawable. - Removed Args part of PathArc/PathCurvetoArgs/PathQuadraticCurvetoArgs classes. The...
    - Three-Dimensional Maneuver Gear for Minecraft: TDMG 1.1.0.0 for 1.5.2: CodePlex???(????????) ?????????(???1/4) ??????????? ?????????? ???????????(??????????) ??????????????????????? ↑????、?????????????????????(???????) ???、??????????、?????????????????????、????????1.5?????????? Shift+W(????)??????????????????10°、?10°(?????????)???
    - Hyper-V Management Pack Extensions 2012: HyperVMPE2012 (v1.0.1.126): Hyper-V Management Pack Extensions 2012 Beta Release
    - Outlook 2013 Add-In: Email appointments: This new version includes the following changes: - Ability to drag emails to the calendar to create appointments. Will gather all the recipients from all the emails and create an appointment on the day you drop the emails, with the text and subject of the last selected email (if more than one selected). - Increased maximum of numbers to display appointments to 30. You will have to uninstall the previous version (add/remove programs) if you had installed it before. Before unzipping the file...
    - Caliburn Micro (WPF, Silverlight, WP7 and WinRT/Metro made easy): Caliburn.Micro v1.5.2: This is a service release. We've fixed a number of issues with Tasks and IoC. We've made some consistency improvements across platforms and fixed a number of minor bugs. See changes.txt for details. Packages available on NuGet: Caliburn.Micro – The full framework compiled into an assembly. Caliburn.Micro.Start - Includes Caliburn.Micro plus a starting bootstrapper, view model and view. Caliburn.Micro.Container – The Caliburn.Micro inversion of control container (IoC); source code...
    - SQL Compact Query Analyzer: 1.0.1.1511: Beta build of SQL Compact Query Analyzer. Bug fixes: - Resolved issue where the application crashes when loading a database that contains tables without a primary key. Features: - Displays database information (database version, filename, size, creation date) - Displays schema summary (number of tables, columns, primary keys, identity fields, nullable fields) - Displays the information schema views - Displays column information (database type, clr type, max length, allows null, etc) - Support...
    - CODE Framework: 4.0.30618.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hide all content.
    - Toolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.18): XrmToolbox improvements: Use new connection controls (use of Microsoft.Xrm.Client.dll); New display capabilities for tools (size, image and colors); Added prerequisites check; Added Most Used Tools feature. Tools improvements: New tool: Solution Transfer Tool (v1.0.0.0) developed by DamSim. Updated tool: View Layout Replicator (v1.2013.6.17), double click on source view to display its layoutXml. All tools list: Access Checker (v1.2013.6.17), Attribute Bulk Updater (v1.2013.6.18), FetchXml Tester (v1.2013.6.1...
    - Media Companion: Media Companion MC3.570b: New: * Movie - using XBMC TMDB - now renames movies if option selected. * Movie - using Xbmc Tmdb - Actor images saved from TMDb if option selected. Fixed: * Movie - Checks for poster.jpg against missing poster filter * Movie - Fixed continual scraping of vob movie file (not DVD structure) * Both - Correctly display audio channels * Both - Correctly populate audio info in nfo's if multiple audio tracks. * Both - added icons and checked for DTS ES and Dolby TrueHD audio tracks. * Both - Stream d...
    - Document.Editor: 2013.24: What's new for Document.Editor 2013.24: Improved Video Editing support; Improved Link Editing support; Minor bug fixes, improvements and speed ups
    - ExtJS based ASP.NET Controls: FineUI v3.3.0: ??FineUI ?? ExtJS ??? ASP.NET ???。 FineUI??? ?? No JavaScript,No CSS,No UpdatePanel,No ViewState,No WebServices ???????。 ?????? IE 7.0、Firefox 3.6、Chrome 3.0、Opera 10.5、Safari 3.0+ ???? Apache License v2.0 ?:ExtJS ?? GPL v3 ?????(http://www.sencha.com/license)。 ???? ??:http://fineui.com/bbs/ ??:http://fineui.com/demo/ ??:http://fineui.com/doc/ ??:http://fineui.codeplex.com/ FineUI???? ExtJS ?????????,???? ExtJS ?。 ????? FineUI ? ExtJS ?:http://fineui.com/bbs/forum.php?mod=viewthrea...
    - BarbaTunnel: BarbaTunnel 8.0: Check Version History for more information about this release.
    - ExpressProfiler: ExpressProfiler v1.5: [+] added Start time, End time event columns [+] added SP:StmtStarting, SP:StmtCompleted events [*] fixed bug with Audit:Logout event
    - patterns & practices: Data Access Guidance: Data Access Guidance Drop4 2013.06.17: Drop 4

    New Projects

    - Activity Injector: Activity Injector is mainly used in unit testing workflow activities.
    - Application Starter Tremens: Application for starting Applicationa
    - AudiwikiREF: This project is created to finish my referral assignment because i faced many problems with the previous one where i couldn't connect the project and commit.
    - AVTS GLUCK ANTIVIRUS SYSTEM: AVTS Gluck AntiVirus System - the best antivirus made by young developers from Ukraine. It was writed in C#, C++, VB.NET, Excutable files, PHP DS, .and more....
    - Base64 File Converter: File converting tool that converts any file into Base64 TXT file and vice versa just by drag-and-drop gesture.
    - Basket Builders Umbraco bootstrap packages: Our umbraco packages to speed up project
    - Bugzy Bug Tracking and Work Management: Creating Bug Tracking and Work Managment system while helping the JSON-RPC standard become as strong as the Bloated XML-RPC(SOAP)
    - Dalmatian Build Script: Dalmatian allows you to automate your build using C# of VB.NET
    - Découverte Majeure MISN Chrono-courses: Chronocourses project
    - DocToolTip Library: DocToolTip library allows creating screenshots of winform forms with the name and type of component controls.
    - DotaPick: Summary
    - HiUpdateTools - easy publish and update your app: HiUpdateTools is a easy tools to use publish new version of your application
    - Kinect Skeleton Recorder: Records and shows XBox 360 Kinect skeleton data using windows forms. Saves the data as XML that can be read by other applications for use in animation.
    - MARIT (BETA): MARIT is a simple drawing app for Android. More features are on the todo-list so stay tuned! Feedback and requests are appreciated.
    - MvcToolbox: MVC Toolbox is a library which contains lots of usefull methods which will help developer to write less code and to be more productive.
    - PrototypeRef: list of testing tools
    - SmartPlay: SmartPlay
    - Solid Geometry Simulation: Implements a simple fur/hair dynamic engine using physBAM. This package includes a semi-implicit solver for hair simulation and plugins for maya/houdini.
    - Windows.Forms.Controls Revisited (SSTA.WinForms): SearchableListBox control built on the Window,Form.ListBox renders a list box items with multiple search terms highlighted.
    - YiLeSystem: This is a dining system.

  • Apple Automator "New PDF from Images" maintaining same filename

    - by mech
    I will potentially have 26k old legacy PICT images to convert to PDF as the first step of a migration. I am using Apple Automator, with "Dispense Items Incrementally" to loop through them. However, I can't seem to get "New PDF from Images" to keep the original filename. Anyone able to offer some advice? :) FYI, I am converting to PDF because I can't use ImageMagick to convert directly to my ultimate JPEG format: my PICTs were created very long ago and trigger a "convert: improper image header" error. See this ticket for more information. Thus I am doing an intermediate conversion from PICT to PDF first, then converting that PDF to JPEG. :) The only thing left is the naming of the "Output File Name", which does not allow me to keep the original filename.

  • Is it easy to upgrade an Ubuntu beta to the final release?

    - by Peter Smit
    At this moment I am preparing a virtual server to host a web application which needs PHP 5.3. The virtual server base image is always Hardy (8.04 LTS), and there is no PHP 5.3 until the upcoming release in a few days: Lucid (10.04 LTS). I am seeing two options:

    - waiting until the final version is released and then starting to prepare this server
    - upgrading now to the beta (do-release-upgrade --devel-release) and, when the final release has come, upgrading to that

    For time constraints I would prefer the second option. I only can't find out whether it will be easy to upgrade from a beta to the 'clean' final release. Is this possible in an easy way? Will it have any drawbacks for security, or will there be any traces left of it ever having been a beta release? Note: the server will not go into production before the LTS is really installed.

  • Server not sending a SYN/ACK packet in response to a SYN packet

    - by jeff
    Using iptraf, tcpdump and wireshark I can see a SYN packet coming in, but only the ACK flag is set in the reply packet. I'm running Debian 5 with kernel 2.6.36. I've turned off window scaling, tcp_timestamps, tcp_tw_recycle and tcp_tw_reuse:

        cat /etc/sysctl.conf
        net.ipv4.tcp_tw_recycle = 0
        net.ipv4.tcp_tw_reuse = 0
        net.ipv4.tcp_window_scaling = 0
        net.ipv4.tcp_timestamps = 0

    I've attached an image of the wireshark output: http://imgur.com/pECG0.png

    Output of netstat:

        netstat -natu | grep '72.23.130.104'
        tcp        0      0 97.107.134.212:18000    72.23.130.104:42905    SYN_RECV

    I've been doing everything possible to find a solution and have yet to figure out the problem, so any help/suggestions are much appreciated.

    UPDATE 1: I've set tcp_syncookies = 0 and noticed I am now replying with 1 SYN+ACK for every 50 SYN requests. The host trying to connect is sending a SYN request about once every second. PCAP FILE

  • Reading the *.sqlite files stored in the Firefox profile

    - by celil
    How would one go about reading the data in the *.sqlite files in one's profile? Trying to read them with sqlite3 was unsuccessful:

        $ sqlite3
        SQLite version 3.7.4
        Enter ".help" for instructions
        Enter SQL statements terminated with a ";"
        sqlite> .load ./downloads.sqlite
        Error: dlopen(./downloads.sqlite, 10): no suitable image found.  Did find:
                ./downloads.sqlite: unknown file type, first eight bytes: 0x53 0x51 0x4C 0x69 0x74 0x65 0x20 0x66
        sqlite> .exit

    Is this due to Firefox using an older version of sqlite? I am using Firefox v3.6.13.

  • What could cause Vista machines to degrade in performance?

    - by gencha
    We have installed several machines running Windows Vista at a client site. These machines run the same set of applications each day and are not operated by any user; they mainly run a 3D application for digital signage purposes. After a few months I noticed that several machines suffer from really bad stuttering. The performance is unacceptable. At first I figured it was faulty hardware, but after putting a fresh image on them, they run perfectly fine again. These are Core i7 machines with GTX 280 graphics adapters. They should be able to handle the workload without any trouble (and they do, at least for a while). I really have no idea how to track down the source of this issue, especially since not all of the machines display the same issue.

  • Uninstall Dell Wave

    - by Onion-Knight
    The image we put on our company laptops includes the Dell Wave interface for biometric login. The Wave UI increases boot time by about 5 minutes (because it loads the fingerprint database, a feature I don't use), so I'm trying to uninstall it, but with little success. There is no line item in the Add/Remove Programs menu to formally delete it, nor is there a service I can stop/remove to disable the Wave UI. I've tried looking online, but all I find are hits for Google Wave and virus-removal forums with HijackThis dumps that include Dell Wave records. Any ideas?

  • Difference between Cassini [IIS Express] and the VS development server or Expression Web

    - by anirudha
    An MVC3 project can be run within Expression Web as well as Visual Studio, opened as a website rather than a project; they work the same. Even if you open a BlogEngine.NET project in VS, it takes a while to debug when you have many themes, but Expression Web debugs it in a second, because theme design does not matter to the code. Expression Web is good because it saves the compile time; the changes we make to a design are usually small, with nothing touched in the backend code.

    I found a little difference between Cassini and the VS development server: if an image path is written the wrong way, like <img src="//img.png"/> instead of <img src="/img.png"/> (the mistake being // instead of /), it does not work when debugging in Expression Web or Visual Studio, but in Cassini it works fine.

    I found that debugging BlogEngine.NET in Expression Web is a great thing, because VS takes a minute or so to debug it the first time. Expression Web also saves time when we design themes within it, which is a good option because the web is made for design too.

    So if you want to debug an application faster, use Cassini; but Expression Web debugging is a good option when an application takes a long time to debug in Visual Studio and EW debugs it in seconds.

  • Portal and Content - Components, part 3 – Applied Customization Framework (4 of 7)

    - by Stefan Krantz
    Have you ever been challenged with a situation where your work task asks you to implement functionality in the WebCenter Portal, and you browse through the Resource Catalog (Business Dictionary) and find the functionality you need, but when you get started there are small shortcomings and you ask yourself:

    - How can I re-use what is out of the box?
    - I wonder what code I need to use to produce something similar and include my new requirements?
    - Must I write a new taskflow?

    The answer to the above questions is, in many cases, simply that you can do a taskflow customization to the out-of-the-box taskflows. In this post I will help you understand how to do such a customization. It is best described as a 4-step process; see the image flow below for illustration.

    Just to clarify a few naming confusions that might occur as you go through the above process:

    - Customization Role is a function within JDeveloper that will allow you to implement view and flow customizations to existing taskflows.
    - WebCenter Portal – Spaces Taskflow Customization Framework: this technology scope does not only refer to WebCenter Spaces; it also includes WebCenter Portal/Framework.
    - A taskflow customization does not overwrite or replace any code; it just creates an additional tip view of the taskflow in the MDS for the current application (WebCenter Portal or WebCenter Spaces).

    To sum up this simple procedure, I would also like to help you find your way around the main topic for this post series. This series focuses primarily on content integration with WebCenter Portal, so where can you find content-related taskflows in the WebCenter libraries? The list below mentions some useful locations for taskflows and each taskflow's page fragments.

    Library Reference - WebCenter Document Library Service View

    Content Presenter
    Path: oracle.webcenter.doclib.view.jsf.taskflows.presenter
    Taskflow: contentPresenter.xml - The Content Presenter taskflow
    Taskflow: contentPresenterWizard.xml - The publishing wizard to select content, select template and preview, including contribution

    Document Manager
    Path: oracle.webcenter.doclib.view.jsf.taskflows.docManager
    Taskflow: documentManager.xml - The Document Manager taskflow, which includes references to document management features including browsing, download, uploading and viewing

    For more information on taskflow customizations please see the following documentation: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_taskflows.htm#BACIEGJD

  • Ugly/Inconsistent Theming in Ubuntu Gnome 12.10

    - by Erland
    Some applications are displaying really ugly widgets and menus. I think it's a GTK issue, and perhaps it applies only to GTK2 apps, but I'm not sure. The numerous questions on here that deal with GTK2 vs GTK3 themes do not answer my problem. Here is my situation:

    - I'm using Ubuntu GNOME with GNOME Shell (installed using the "upgrade" instructions rather than a fresh install) with the default Adwaita theme.
    - The reason I did an upgrade instead of a fresh install is that I'm on a MacBook Air and there is no Mac image/ISO for Ubuntu GNOME.
    - Previously, I did a fresh install of Ubuntu GNOME 12.10 and had no theming problems.
    - Now, apps like Nautilus, Rhythmbox, Brasero, and even third-party ones like Lightread look exactly as expected, but other apps, including Firefox, Inkscape, GIMP and LibreOffice, look awful.

    Some examples:

    - Firefox with ugly location bar: http://ubuntuone.com/3e2X0JTa4CT4afC4303U9c vs the Nautilus location bar: http://ubuntuone.com/3TbHWWuNMcJnlpI4IpjiUO
    - GIMP file dialogue (like Windows 95!): http://ubuntuone.com/4ioCcqq3flgO7zAWgAhfWy vs the Rhythmbox file dialogue (correct): http://ubuntuone.com/2xLplCOBvQnyeqdsTGdgXq
    - Menus in LibreOffice (very bad for usability): http://ubuntuone.com/26WTaEz4PMGmiItGeSmBjZ vs menus in Rhythmbox: http://ubuntuone.com/4Ib4thMLqohsle6J5KEvuI

    I've been searching for a solution to this problem for some time. The logical explanation is that all the GTK3 apps are working and anything that is still using GTK2 is not. If that's the case though, why did the same installation (Ubuntu GNOME 12.10) and the same theme (Adwaita) previously work with all those GTK2 apps? Desperate for help!

  • Computer Networks UNISA - Chap 15 – Network Management

    - by MarkPearl
    After reading this section you should be able to:

    - Understand network management and the importance of documentation, baseline measurements, policies, and regulations to assess and maintain a network's health
    - Manage a network's performance using SNMP-based network management software, system and event logs, and traffic-shaping techniques
    - Identify the reasons for and elements of an asset management system
    - Plan and follow regular hardware and software maintenance routines

    Fundamentals of Network Management

    Network management refers to the assessment, monitoring, and maintenance of all aspects of a network, including checking for hardware faults, ensuring high QoS, maintaining records of network assets, etc. The scope of network management differs depending on the size and requirements of the network. All subtopics of network management share the goals of enhancing efficiency and performance while preventing costly downtime or loss.

    Documentation

    The way documentation is stored may vary, but to adequately manage a network one should at least record the following:

    - Physical topology (types of LAN and WAN topologies – ring, star, hybrid)
    - Access method (does it use Ethernet 802.3, token ring, etc.)
    - Protocols
    - Devices (switches, routers, etc.)
    - Operating systems
    - Applications
    - Configurations (what version of operating system, and config files for server/client software)

    Baseline Measurements

    A baseline is a report of the network's current state of operation. Baseline measurements might include the utilization rate for your network backbone, the number of users logged on per day, etc. Baseline measurements allow you to compare future performance increases or decreases caused by network changes or events with past network performance. Obtaining baseline measurements is the only way to know for certain whether a pattern of usage has changed, or whether a network upgrade has made a difference. There are various tools available for measuring baseline performance on a network.

    Policies, Procedures, and Regulations

    Following rules helps limit chaos, confusion, and possibly downtime. The following policies, procedures and regulations make for sound network management:

    - Media installations and management (includes designing the physical layout of cable, etc.)
    - Network addressing policies (includes choosing and applying an addressing scheme)
    - Resource sharing and naming conventions (includes rules for logon IDs)
    - Security-related policies
    - Troubleshooting procedures
    - Backup and disaster recovery procedures

    In addition to internal policies, a network manager must consider external regulatory rules.

    Fault and Performance Management

    After documenting every aspect of your network and following policies and best practices, you are ready to assess your network's status on an ongoing basis. This process includes both performance management and fault management.

    Network Management Software

    To accomplish both fault and performance management, organizations often use enterprise-wide network management software. There are various software packages that do this; each collects data from multiple networked devices at regular intervals, in a process called polling. Each managed device runs a network management agent. So as not to affect the performance of a device while collecting information, agents do not demand significant processing resources. The definitions of managed devices and their data are collected in a MIB (Management Information Base).

    Agents communicate information about managed devices via any of several application layer protocols. On modern networks most agents use SNMP, which is part of the TCP/IP suite and typically runs over UDP on port 161. Because of their flexibility, sophisticated network management applications are a challenge to configure and fine-tune. One needs to be careful to collect only relevant information and not cause performance issues (e.g. polling a device every 5 seconds can be a problem with thousands of devices). MRTG (Multi Router Traffic Grapher) is a simple command line utility that uses SNMP to poll devices and collects data in a log file. MRTG can be used with Windows, UNIX and Linux.

    System and Event Logs

    Virtually every condition recognized by an operating system can be recorded. This is typically done using event logs. In Windows there is a GUI event log viewer; similar information is recorded in UNIX and Linux in a system log. Much of the information collected in event logs and syslog files does not point to a problem, even if it is marked with a warning, so it is important to filter your logs appropriately to reduce the noise.

    Traffic Shaping

    When a network must handle high volumes of network traffic, users benefit from a performance management technique called traffic shaping. Traffic shaping involves manipulating certain characteristics of packets, data streams, or connections to manage the type and amount of traffic traversing a network or interface at any moment. Its goals are to assure timely delivery of the most important traffic while offering the best possible performance for all users. Several types of traffic prioritization exist, including prioritizing traffic according to any of the following characteristics:

    - Protocol
    - IP address
    - User group
    - DiffServ
    - VLAN tag in a Data Link layer frame
    - Service or application

    Caching

    In addition to traffic shaping, a network or host might use caching to improve performance. Caching is the local storage of frequently needed files that would otherwise be obtained from an external source. By keeping files close to the requester, caching allows the user to access those files quickly. The most common type of caching is Web caching, in which Web pages are stored locally. To an ISP, caching is much more than just convenience: it prevents a significant volume of WAN traffic, thus improving performance and saving money. (A small code sketch illustrating the caching idea follows at the end of this summary.)

    Asset Management

    Another key component in managing networks is identifying and tracking hardware. This is called asset management. The first step in asset management is to take an inventory of each node on the network. You will also want to keep records of every piece of software purchased by your organization. Asset management simplifies maintaining and upgrading the network, chiefly because you know what the system includes. In addition, asset management provides network administrators with information about the costs and benefits of certain types of hardware or software.

    Change Management

    Networks are always in a state of flux, with changes including:

    - Software changes and patches
    - Client upgrades
    - Shared application upgrades
    - NOS upgrades
    - Hardware and physical plant changes
    - Cabling upgrades
    - Backbone upgrades

    For a detailed explanation of each of these, read the textbook (Page 750 – 761).
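    To make the caching idea above concrete, here is a minimal C# sketch (ours, not the textbook's) of a web cache: serve the local copy on a hit, fetch and store on a miss. The cache directory and URL are placeholder values:

        using System;
        using System.IO;
        using System.Net;

        class TinyWebCache
        {
            static void Main()
            {
                string cacheDir = @"C:\webcache";            // hypothetical cache root
                string url = "http://example.com/page.html"; // placeholder URL
                string cachePath = Path.Combine(cacheDir, "page.html");

                if (!File.Exists(cachePath))                 // cache miss
                {
                    Directory.CreateDirectory(cacheDir);
                    using (var client = new WebClient())
                    {
                        client.DownloadFile(url, cachePath); // fetch and store locally
                    }
                }

                string html = File.ReadAllText(cachePath);   // cache hit path
                Console.WriteLine("Served {0} bytes from {1}", html.Length, cachePath);
            }
        }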

  • How can I create the smallest possible mirror of the archive?

    - by Registered User
    I need to create an HTTP URL on my laptop so that an Ubuntu installation can begin within my laptop in a Xen environment. This is how the final thing will look: the host and client are both going to be my laptop. I Googled and came across apt-mirror and some other packages, but I do not want to archive the entire 15 GB of Ubuntu repositories on my machine. It is not possible to use a CD, ISO, or loop-mounted disk (reason mentioned below). I have tried using a netboot image on the local machine, which failed: if you are attempting to create a virtual machine on hardware which does not support VT, the virt-manager installer necessarily needs a URL of this sort: http://archive.ubuntu.com/ubuntu/dists/hardy/main/installer-i386/current/images/netboot/ Any other option to create a guest OS is simply grayed out. The unfortunate part is that my Ethernet connections do not work when I boot with Xen 4.0 and a pv-ops Dom0 kernel from Jeremy's tree, which is where I have to do this work. So I have to create a URL structure which is similar to the Ubuntu mirrors. How can I do this with the bare minimum, so that at least the console boots, and once the console comes up I can do some work?

  • RTFMobile

    - by ultan o'broin
    It may seem obvious, but it's worth stating again: the idea that mobile users are going to read lots of user assistance on their devices is just wrong. So Jakob Nielsen's post Mobile Content Is Twice as Difficult serves as a timely reminder for anyone thinking of putting manuals as a form of user assistance onto mobile phones. There is also an excellent post on UXMag.com explaining that one of the ways to screw up with your iPhone app is to throw an old-style user manual into the user experience: 10 Surefire Ways to Screw Up Your iPhone App. (Image copyright and referenced from UX Magazine 2010.) Instead, user assistance alternatives, if any at all, include one-time tours, graphics, in-context instructions, and so on. Not so sure that importing "humor" and "personality" works so well in the enterprise app space, myself. However, the message is clear: iPhone users don't read manuals. Great message. Users will figure it out, and if they can't, well then your app's UX is a problem and the app will fail. Shame some teams are obsessed with figuring out ways to port existing manuals to mobile platforms without any thought for the UX. Razorfish's Scatter/Gather blog says it all: "One thing that is particularly discouraging, most material currently available on 'Creating Content for the iPad' or similar themes turns out to be about getting traditional content onto, or into, the iPad." Now, manuals for non-end users in PDF format on eReaders are a different matter; I have research on that, but it's for another post. Technorati Tags: mobile, user assistance, UX, user experience, manuals, documentation

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise

    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger, because the amount of data is much bigger; so much bigger that our approach in the previous post won't scale to meet the real-world needs.

    From our previous post, here are the challenges we need to conquer:

    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger: too big to fit into the normal R memory space, and too big to want to move across the network
    - The overall process needs to run fast: much faster than a single processor
    - The actual data needs to be kept secured: another reason not to want to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In this post, we will show how we moved from the sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept.

    Getting started with the Database

    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise: we had sample data, but we did not have the full customer data yet, so our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create(). The code also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA.

    If we go to sql*plus, we can also check out our new IRR_DATA table. At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function working with database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function. This worked. No SQL coding required!

    Going from Crawling to Walking

    Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1

    Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running it. What actually happens is that ore.groupApply uses the Oracle database to perform the work. The Oracle database is what actually splits the IRR_DATA table by ACCOUNT. Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function, and finally the Oracle database combines all the individual results from the calls to the R function.

    This is significant because now the embedded R engine only needs to deal with the data for a single account at a time. Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts).

    Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.

    From Walking to Running

    ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE: Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof.

    As part of Oracle R Enterprise, there is a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, there is a bit of simple setup needed. Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database's pipeline table function mechanisms. Here is the setup script:

    At this point, our initial setup of rqGroupEval is done for the IRR_DATA table. The next step is to define our R function to the database. We do that via a call to ORE's rqScriptCreate.

    Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipeline table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input; you can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement; the dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by.
    The Real-World Results

    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.

    For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you count hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features. Let's look at the SQL used in the customer proof.
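    A sketch of the shape of that SQL follows. The degree of parallelism (72) matches the figure quoted below, the staging table and function names echo the SQL Monitor operations mentioned below, and the five output columns and script name are illustrative.

    ```sql
    -- Sketch only: a parallel hint on the input cursor is all that is
    -- needed to let Oracle run parallel R engines.
    SELECT *
      FROM table(irrStageGroupEval(
             cursor(SELECT /*+ parallel(t, 72) */ *
                      FROM IRR_STAGE2 t),
             NULL,
             'SELECT CAST(''x'' AS VARCHAR2(30)) ACCOUNT,
                     1 IRR_1YR, 1 IRR_3YR, 1 IRR_5YR, 1 IRR_10YR, 1 IRR_ITD
                FROM dual',
             'ACCOUNT',
             'IRR_script'));
    ```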
    Notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines. Watching this SQL run in the Real-Time SQL Monitor during the proof of concept, you can notice a few things:

    - The SQL completed in 110 seconds (1.8 minutes).
    - We calculated rates of return for 5 time periods for each of 911k accounts (the number of rows returned by the IRRSTAGEGROUPEVAL operation).
    - We accessed 103m rows of detailed cash flow/market value data (the number of rows returned by the IRR_STAGE2 operation).
    - We ran with 72 degrees of parallelism spread across 4 database servers.
    - Most of our 110 seconds was spent in the "External Procedure call" event.
    - On average, we performed 8,200 executions of our R function per second (911k accounts / 110s).
    - On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts).
    - On average, we did 41,000 single-time-period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods).
    - On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s).

    R + Oracle R Enterprise: Best of R + Best of Oracle Database

    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained.

    This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues, as well as simplifying the operations and security issues. It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.

    In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • Clonezilla, nLite and PCs with different hardware specs

    - by r2b2
    Hi, I used Clonezilla to create and restore images from a master computer to other workstations with the same specs. The problem now is that there are new computers whose hardware specs differ from the ones I maintain (mostly the video card is different). Is it possible to create a customized Windows XP installer using nLite and integrate all potential video card and motherboard drivers? If I then use this nLite ISO to install to a master computer, which I will later clone and restore to other workstations, will Windows XP still pick up the correct driver set? During XP installation, does the installer transfer all the drivers to the computer's hard disk? Thanks! Ryan

    Read the article

  • Problems connecting CentOS to the network when running in VMware

    - by Sakin
    Hi, I installed CentOS on VMware running on Windows XP. When trying to configure it to connect to the internet, I get an error message when bringing up the network interface:

    [root@VMLinux ~]# /etc/init.d/network start
    Bringing up loopback interface:                            [  OK  ]
    Bringing up interface eth0:
    Determining IP information for eth0... failed              [FAILED]

    The VM is running on a machine that has access to the network, and I tried it on two different networks that have DHCP enabled. I configured VMware to use a bridged connection as well as a NAT connection. An image of Ubuntu runs fine on the same VMware. What am I missing here? Thanks.

    Read the article
