Search Results

Search found 22405 results on 897 pages for 'message threading'.


  • Instant Rename and Rename Refactoring

    - by Petr
    During the last few weeks I have received a few questions about rename refactoring, and some users have also complained to me that refactoring in NetBeans 6.x was much faster. So I would like to explain the situation. For people who don't know, the Instant Rename action and Rename Refactoring can look like one action, but that is not true, even though both actions use the same shortcut (CTRL+R). NetBeans 6.x contained only the Instant Rename action (speaking about PHP support), which amounted to a very simple rename refactoring within one file. Since NetBeans 7.0, the Instant Rename action works only in a "non-public" context: it is used for quickly renaming variables that have local scope, such as inside a method, or for renaming private methods and fields that cannot be used outside the scope where they are declared (a small code sketch of the distinction follows at the end of this post).

    From the user's point of view, the two actions are easy to tell apart. When CTRL+R invokes Instant Rename, the identifier is surrounded with a rectangle and you can rename it directly in the file. It is fast and simple, and the usages of the identifier are renamed as you type. The picture below shows Instant Rename for the $message identifier, which is visible only in the print_test method; this is why CTRL+R invokes Instant Rename for it. NetBeans 7.0 added Rename Refactoring, which is called for public identifiers, meaning identifiers that could be used in other files. If you press CTRL+R when the caret is inside the $hello identifier from the picture above, NetBeans recognizes that $hello is declared and used in the global context and calls Rename Refactoring, which brings up a dialog to change the name of the identifier. From this dialog you have to preview the suggested changes by pressing the Preview button, and then execute the refactoring through the Do Refactoring button. Yes, it is more complicated from the user's point of view than Instant Rename, but with Rename Refactoring NetBeans can change more files at once. It is the developer's responsibility to decide whether the suggested changes are right and the refactoring can be executed, or whether the original name should be kept in some files.

    Someone could argue that if the $hello variable is not used in any other file, Instant Rename could be used in that case too. That is true, but then NetBeans would have to know all usages of all identifiers and keep this information up to date while you edit a file. I am sure this is not possible without performance problems, mainly for big projects. So the usages are computed after pressing the Preview button. And why is the Refactor button always disabled in the Rename dialog, so the user always has to go through the preview phase? NetBeans has an API and SPI for implementing refactoring actions, and this dialog is part of that infrastructure. If you rename an identifier in Java, for example, the Refactor button is enabled, but Java is a strongly typed language and you can be almost 99% sure that the IDE will suggest the right results. In PHP, a dynamic language, we cannot be sure; what NetBeans finds is only a "guess". This is why NetBeans pushes developers to preview the changes for a PHP rename. I hope I have explained it clearly, and I am open to any discussion. What I have described above is the situation in NetBeans 7.0 and 7.0.1, and it will probably also apply to NetBeans 7.1, because there is no plan to change it. Please write your opinion here.
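    To make the local/global distinction concrete, here is a small PHP sketch (my own illustration; the identifier names echo the ones from the screenshots above):

        <?php
        // $hello is declared in the global scope, so it could be referenced
        // from other files: CTRL+R opens the Rename Refactoring dialog.
        $hello = "Hello, world!";

        function print_test() {
            // $message is visible only inside print_test, so CTRL+R
            // renames it in place with Instant Rename.
            $message = "test";
            echo $message;
        }

        print_test();
        echo $hello;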

    Read the article

  • Don't Kill the Password

    - by Anthony Trudeau
    A week ago Mr. Honan from Wired.com penned an article on security titled "Kill the Password: Why a String of Characters Can't Protect Us Anymore." He asserts that the password is not effective and a new solution is needed. Unfortunately, Mr. Honan was a victim of hacking, and as a result he has a victim's vendetta. His conclusion is ill conceived, even though there are smatterings of truth and good advice in it.

    The password is a security barrier much like a lock on your door. In and of itself it does not guarantee protection. You can have a good password, akin to a steel-reinforced door with the best lock money can buy, or you can have a poor password like "password", which is like the sliding latch on a bathroom stall. But just as in the real world, a lock isn't always enough. You can have a lock, a security system, video cameras, guard dogs, and even armed security guards, and none of that guarantees your protection. Even top secret government agencies can be breached by someone who is just that good (as dramatized in movies like Mission Impossible). And that's the crux of it. There are real hackers out there who are that good. Killer coding ninja monkeys do exist! We still have locks on our doors because they still serve their role. Passwords are no different.

    Security doesn't end with the password. Most people would agree that stuffing your mattress with your life savings isn't a good idea even if you have the best locks and security system; most people agree it's safest to have the money in a bank. Essentially this is compartmentalization, and compartmentalization extends to the online world as well. You're at risk if your online banking accounts are linked to the same account as your social networks. This is especially true if you're lackadaisical about linking those social networks to outside sources, including apps. The object here is to minimize the damage that can be done: an attacker should not be able to get into your bank account because they breached your Twitter account.

    It's time to prioritize once you've compartmentalized. This simply means deciding how much security you want for the different compartments, which I'll call security zones. Social networking applications like Facebook provide a lot of security features. However, security features are almost always a compromise with privacy and convenience. It's similar to an engineering adage, but in this case it's security, convenience, and privacy: pick two. For example, you might use a safe instead of a bank to store your money because the convenience of having your money closer, or the privacy of not having the bank records, is more important than the added security.

    The following are lists of security do's and don'ts (these aren't meant to be exhaustive, and each could be an article in itself).

    Security Do's:
      • Use strong passwords based on a phrase
      • Use encryption whenever you can (e.g. HTTPS in Facebook)
      • Use a firewall (and learn to use it properly)
      • Configure security on your router (including port blocking)
      • Keep your operating system patched
      • Make routine backups of important files
      • Realize that if you're not paying for it, you're the product

    Security Don'ts:
      • Link accounts if at all possible
      • Reuse passwords across your security zones
      • Use real answers for security questions (e.g. mother's maiden name)
      • Trust anything you download
      • Ignore message boxes shown by your system or browser
      • Forget to test your backups
      • Share your primary email indiscriminately

    Only you can decide your comfort level between convenience, privacy, and security. Attackers are going to find exploits in software. Software is complex and depends on other software. The exploits are the responsibility of the software company, but your security is always your responsibility. Complete security is an illusion, but there is plenty you can do to minimize the risk online, just as you do in the physical world. Be safe and enjoy what the Internet has to offer. I expect passwords to be necessary for just as long as locks are.

    Read the article

  • Scid Chess Program Compiling issue

    - by lbochitt
    First of all I would like to point out that I'm new to Ubuntu, so sorry if what I am asking is ridiculous. I have downloaded the Scid 4.4 chess program and tried to compile it as explained on its website:

    1) Initialize git.
    2) Create a folder where you want to download and compile the source, then cast git init on your command line.
    3) You're now ready to download the sources; recall Fulvio's spell: git clone git://scid.git.sourceforge.net/gitroot/scid/scid. This should get you the latest Scid source.
    4) You're now ready to compile Scid. In principle, all you need to do is ./configure and then make.
    5) If you get stuck, you probably need the developer versions of tcl/tk. This translates into issuing these three commands:

        sudo apt-get install tcl8.5-dev
        sudo apt-get install tk8.5-dev
        sudo apt-get install zlib1g-dev

    6) You should now be ready to compile.

    The problem is that when I run ./configure to start compiling, the following message appears in the terminal:

        configure: Makefile configuration program for Scid
        Tcl/Tk version: 8.5
        Your operating system is: Linux 3.8.0-19-generic
        Location of "tcl.h": /usr/include/tcl8.5
        Location of "tk.h": /usr/include/tcl8.5
        Location of Tcl 8.5 library: not found
        Location of Tk 8.5 library: not found
        Checking if your system already has zlib installed: yes.
        Using Makefile.conf.
        Not all settings could be determined!
        The default Makefile was written.
        You will need to edit it before you can compile Scid.

    What should I do? Has anybody faced this problem before? Thanks in advance.

    I have run ls -l /usr/include/tcl8.5/tcl.h; here is the result:

        -rw-r--r-- 1 root root 87291 abr 22 10:45 /usr/include/tcl8.5/tcl.h

    I have also tried what you suggested: "Could you run git reset --hard HEAD and git clean -d -f to clean up everything using Git? Then run ./configure again. Just a shot in the dark - I've seen some GNU automake stuff still listening to its 'cached' version of the results or something." Still no solution. I don't know why it can't recognize the library even though it is installed.

    I opened configure to see where the program looks for the library. This is the code:

        # libraryPath: List of possible locations of Tcl/Tk library.
        set libraryPath {
            /usr/lib
            /usr/lib64
            /usr/local/tcl/lib
            /usr/local/lib
            /usr/X11R6/lib
            /opt/tcltk/lib
            /usr/lib/x86_64-linux-gnu
        }
        lappend libraryPath "/usr/lib/tcl${tclv}"
        lappend libraryPath "/usr/lib/tk${tclv}"
        lappend libraryPath "/usr/lib/tcl${tclv_nodot}"
        lappend libraryPath "/usr/lib/tk${tclv_nodot}"

        # Try to add tcl_library and auto_path values to libraryPath,
        # in case the user has a non-standard Tcl/Tk library location:
        if {[info exists ::tcl_library]} {
            lappend headerPath \
                [file join [file dirname [file dirname $::tcl_library]] include]
            lappend libraryPath [file dirname $::tcl_library]
            lappend libraryPath $::tcl_library
        }
        if {[info exists ::auto_path]} {
            foreach name $::auto_path {
                lappend libraryPath $name
            }
        }
        if {! [info exists var(TCL_INCLUDE)]} {
            puts -nonewline { Location of "tcl.h": }
            set opt(tcl_h) [findDir "tcl.h" $headerPath "TCL_VERSION.*$tclv"]
            if {$opt(tcl_h) == ""} {
                puts "not found"
                set success 0
                set opt(tcl_h) "$::defaultVar(TCL_INCLUDE)"
            } else {
                puts $opt(tcl_h)
            }
        }
        set opt(tcl_lib) ""
        if {! [info exists var(TCL_LIBRARY)]} {
            puts -nonewline " Location of Tcl $tclv library: "
            set opt(tcl_lib) [findDir "libtcl${tclv}.*" $libraryPath]
            if {$opt(tcl_lib) == ""} {
                set opt(tcl_lib) [findDir "libtcl${tclv_nodot}.*" $libraryPath]
                if {$opt(tcl_lib) == ""} {
                    puts "not found"
                    set success 0
                    set opt(tcl_lib) "$opt(tcl_h)"
                    set opt(tcl_lib_file) "tcl\$(TCL_VERSION)"
                } else {
                    set opt(tcl_lib_file) "tcl${tclv_nodot}"
                    puts $opt(tcl_lib)
                }
            } else {
                set opt(tcl_lib_file) "tcl\$(TCL_VERSION)"
                puts $opt(tcl_lib)
            }
        }
        if {! [info exists var(TCL_INCLUDE)]} {
            set var(TCL_INCLUDE) "-I$opt(tcl_h)"
        }
        if {! [info exists var(TCL_LIBRARY)]} {
            set var(TCL_LIBRARY) "-L$opt(tcl_lib) -l$opt(tcl_lib_file)"
        }
        return $success

    So I guess (and by "guess" I mean I have no idea how to code) I should write /usr/lib/tcl8.5 and /usr/lib/tk8.5 somewhere in here, am I right? A sketch of what I am thinking of trying is below.
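    For what it's worth, here is the check I have in mind (a sketch only; the multiarch path is my guess, not something from the Scid docs):

        # Find where the Tcl/Tk shared libraries actually live:
        find /usr/lib -name "libtcl8.5*"
        find /usr/lib -name "libtk8.5*"

        # If they show up in a directory configure is not searching, the
        # generated Makefile could be edited so TCL_LIBRARY points there,
        # along the lines of (hypothetical path):
        #   TCL_LIBRARY = -L/usr/lib/x86_64-linux-gnu -ltcl8.5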

    Read the article

  • F# WPF Form – the basics

    - by MarkPearl
    I was listening to Dot Net Rocks show #560 about F#, and during the podcast Richard Campbell brought up a good point with regard to F# and a GUI. In essence, what I understood his point to be was that until one could write an end-to-end application in F#, it would be a hard sell to developers to take it on. In part I agree with him; while I am beginning to really enjoy learning F#, I can't help but feel that I would be a lot further into the language if I could do my Windows Forms like I do in C# or VB.NET, for the simple reason that in "playing" applications I spend the majority of the time in the UI layer. So I have been keeping my eye out for some examples of creating a WPF form in an F# project, and came across Tim's F# Twitter Stream Sample, which had exactly this. Of course, he actually had a bit more than a basic form, but it was enough for me to scrap the insides and glean what I needed. So today I am going to make just the very basic WPF form with all the goodness of a XAML window.

    Getting Started

    First we need to create a new solution with a blank F# application project; I have called mine FSharpWPF. Once you have the project created, you will need to change the project type from a Console Application to a Windows Application. You do this by right-clicking on the project file and going to its properties. Once that is done, you will need to add the appropriate references: right-click on References in the Solution Explorer and click "Add Reference". You should add the appropriate .NET references below for WPF and XAML to work. Once these references are added, you then need to add your XAML file to the project. You can do this by adding a new item of type xml to the project and simply changing the file extension from xml to xaml. Once the xaml file has been added to the project, you will need to add valid window XAML. An example of a very basic xaml file is shown below:

        <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="F# WPF Form" Height="350" Width="525">
            <Grid>
            </Grid>
        </Window>

    Once your xaml file is done, you need to set the build action of the xaml file from "None" to "Resource", as depicted in the picture below. If you do not set this, you will get an IOException error when running the completed project, with a message along the lines of "Cannot locate resource 'window.xaml'".

    You then tie everything up by putting the correct F# code in Program.fs to load the xaml window. In Program.fs, put the following code:

        module Program

        open System
        open System.Collections.ObjectModel
        open System.IO
        open System.Windows
        open System.Windows.Controls
        open System.Windows.Markup

        [<STAThread>]
        [<EntryPoint>]
        let main(_) =
            let w = Application.LoadComponent(
                        new System.Uri("/FSharpWPF;component/Window.xaml",
                                       System.UriKind.Relative)) :?> Window
            (new Application()).Run(w)

    Once all this is done, you should be able to build and run your project. What you have done is created a WPF-based window inside an F# project. It should look something like the screenshot below. Nothing too exciting, but sufficient to illustrate the very basic WPF form in F#. Hopefully in future posts I will build on this to expose button events etc.

    Read the article

  • Listen to Online Radio with Antenna

    - by Asian Angel
    Are you looking for some fresh new music to listen to at home or at work? With Antenna you can listen to online radio stations from all over the world. Note: Requires Adobe AIR (download link at bottom of article).

    Antenna in Action

    Once you have completed the installation and started Antenna up, this is the window that you will see. The left side holds a "browsing pane" where you can search for the stations that you would like to listen to using the various categories. Based on the stations that you choose, the background map changes location to match each station's location. Here is a closer look at the "Categories Bar".

    For our first example we used the "Country Category" to find our first station to listen to. When you choose a country you will be presented with a list of the stations available for that country. To start listening to a particular station, just double-click on the appropriate entry line. Here is a closer look at the "browser pane" with our first station playing. Notice the "Reliability Indicator" available for each listing; some stations may be better than others, and you can use this to choose the best streaming stations from the list.

    In the upper left corner you will notice three icons; each opens a small pop-up window with a specific purpose. The first icon opens the "About Window". If you need to contact Antenna's creator or would like to place a request for a station to be added to the app, this is the best way to do it. The second icon opens an Antenna-specific chat window. The third icon lets you set a default location and make adjustments to some of the app's settings.

    Recording Audio

    The "Recording Function" is the only area where we experienced some "quirkiness" with the app. To start recording, press the "Round White Button". Note: Based on feedback on the app creator's webpage, some people have experienced the same problem we did during our tests, with the app failing to complete recordings. Hopefully this bug will be fixed in the next release. Once recording has started the button turns red; click on the button again to stop recording. Once you have stopped recording, you will see the following message window appear, and the main window will be shaded over with a whitish color until you click "OK".

    Conclusion

    Regardless of the slight quirkiness in recording online music, Antenna more than makes up for it with a terrific selection of online stations and streaming capability. New fresh music to listen to is only a click or two away.

    Links

    Download Antenna (Antenna Homepage)
    Download Antenna at Softpedia
    Download Adobe AIR

    Read the article

  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or cache results in OSB will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and the Java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward, as the previous options mainly involved writing your own scripts to do this. You can find an excellent description of how this works at James Bayer's blog. You can also find the WebLogic documentation here.

    As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB supports service result caching for Business Services with Coherence. If you use Business Services that return somewhat static results that do not change often, you can configure those Business Services to cache results. For Business Services that use result caching, you can control the time-to-live for the cached result. After the cached result expires, the next Business Service call invokes the back-end service to get the result, which is then stored in the cache for future requests to access. I'm thinking that this caching functionality would be perfect for some sort of cross-reference data that is refreshed nightly by batch. You can find the OSB Business Service documentation here.

    Result Caching in a dedicated JVM

    This example demonstrates these new features by configuring an OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason you may want to use a separate, dedicated JVM is that the result cache data could potentially be quite large, and you may want to protect your OSB Java heap. In this example, the client calls an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached, and when called again, the respective results are retrieved from the cache rather than the external system.

    Step 1 – Set up your Coherence Server

    Via the OSB Administration Server Console, create your Coherence Server to be used as the results cache. Here are the configured Coherence Server arguments from the Server Start tab. Note that I'm using the default Cache Config and Override files in the domain:

        -Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m
        -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml
        -Dtangosol.coherence.cluster=OSB-cluster
        -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml
        -Dtangosol.coherence.distributed.localstorage=true
        -Dtangosol.coherence.management=all
        -Dtangosol.coherence.management.remote=true
        -Dcom.sun.management.jmxremote

    Just in case you need it, here is my Coherence Server classpath:

        /app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar:
        /app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar:
        /app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jar

    By default, OSB will try to create a local result cache instance. You need to disable this by adding the following JVM parameters to each of the OSB Managed Servers:

        -Dtangosol.coherence.distributed.localstorage=false
        -DOSB.coherence.cluster=OSB-cluster

    If you need more information on configuring a remote result cache, have a look at the configuration documentation under the heading Using an Out-of-Process Coherence Cache Server.

    Step 2 – Configure your Business Service

    Under the respective Business Service Message Handling Configuration (Advanced Properties), you need to enable "Result Caching". Additionally, you need to determine what the cache data will be keyed on. In the example below, I'm keying it on the unique Employee Id.

    The Results

    As this test was on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id, with result caching disabled. You can see that this caused the back-end Business Service (BS_GetEmployeeData) to be called for each request. Then, after enabling result caching, I sent the same number of identical requests. You can now see the Business Service was only invoked once, on the first request; all subsequent requests used the Results Cache.

    Read the article

  • Test and Report Add-on Compatibility in Firefox

    - by Asian Angel
    Now that the new version of Firefox is out, you probably have a favorite extension or two that has not been updated yet. With the Add-on Compatibility Reporter extension you can get that extension working again, test it, and report back to Mozilla on how well it does.

    Before

    For our example we chose a great extension that unfortunately has not been updated yet. As you can see here, Firefox is refusing to let the extension install.

    After

    As soon as you install Add-on Compatibility Reporter, you will be presented with an information page on how the extension works and what you can do with it. You should definitely take a moment to read this, as it is very helpful. After trying our non-compatible extension again, we were able to proceed with the install process. Notice at the bottom that "compatibility checking" has been overridden. Success!

    As soon as we restarted our browser, it was easy to see the "non-compatible" icon in the Add-ons Manager window, but the extension did install (terrific!). Clicking on the extension's entry reveals a new button in the lower right corner. Using the "Compatibility Drop-Down Menu", you can report whether the extension is working as well as before or is actually having problems. The extension that we used for our example had no problems whatsoever, so good news there. Whichever option you choose, you will be presented with a small "Report Window" with information about the extension, your browser's version number, and your operating system. Click "Submit Report" to send it on its way. You will see a confirmation message letting you know that your report was successfully submitted. While the extension itself has not been altered in any form, at least you have it working again and have helped verify whether it still works well or not. Notice the "notation" present now in place of the "Compatibility Button" that lets you know that you have already taken care of that particular extension. Looking great…

    Conclusion

    If you have a favorite extension that you miss using in the newest release of Firefox, then this is definitely an extension to add to your browser. Not only will your extension start working again, but you can let Mozilla know how well it is working and (hopefully) help get the extension updated.

    Links

    Download the Add-on Compatibility Reporter extension (Mozilla Add-ons)

    Read the article

  • WEB203 – Jump into Silverlight!… and Become Effective Immediately with Tim Huckaby, Fou

    - by Robert Burger
    Getting ready for the good stuff. Definitely wish there were more Silverlight and WCF RIA sessions, but this is a start. Was lucky to get a coveted power-enabled seat. Luckily, thanks to my trusty, slow Verizon data card, I can get these notes out amidst a total Internet outage here. This is the second breakout session of the day, and it is by far standing-room only. I stepped out before the session started to get a cool Diet COKE and wouldn't have gotten back in if I didn't already have a seat.

    Tim says this is an intro session and that he's been begging for intro sessions at TechEd for years; looking at this audience, he thinks the demand is there. Admittedly, I didn't know this was an intro session, or I might have gone elsewhere. But it was the very first Silverlight session, so I had to be here. Tim says he will be providing a very good comprehensive reference application at the end of the presentation. He has just demoed it, and it is a full CRUD-based Sales Manager application based on… AdventureWorks!

    Session Agenda
      • What it is / How to get started
      • Declarative Programming
      • Layout and Controls, Events and Commands
      • Working with Data
      • Adding Style to Your Application

    Silverlight… "WPF Light"
      • Why is the download 4.2MB? Because the direct competitor is a 4.2MB download. There is no technical reason it is not the entire framework; it is purely to "be competitive".

    Getting Started
      • Get all of the following downloads from www.silverlight.net/getstarted
      • Install VS2010 or Visual Web Developer Express 2010
      • Install Silverlight 4 Tools for VS2010
      • Install Expression Blend 4
      • Install the Silverlight 4 Toolkit

    Reference Application Features
      • Uses the MVVM pattern – a way to move data access code that would normally be inline within the UI into tidy data access libraries
      • Images loaded dynamically from the database, converting GIF to PNG because Silverlight does not support GIF
      • LINQ to SQL is the data access model
      • WCF is the data provider and is using binary message encoding

    Declarative Programming
      • XAML replaces code for UI representation
      • Attributes control Layout and Style
      • Event handlers wired up in XAML
      • Declarative Data Binding

    Layout Overview
      • Content rendering flows inside of parent
      • Fixed positioning (Canvas) is seldom used
      • Panels are used to house content
      • Margins and Padding over fixed size

    Panels
      • StackPanel – Arranges child elements into a single line oriented horizontally or vertically
      • Grid – A flexible grid area that consists of rows and columns
      • Canvas – An area where positions are specifically fixed
      • WrapPanel (in Toolkit) – Positions child elements in sequential position left to right and top to bottom
      • DockPanel (in Toolkit) – Positions child controls within a dockable area

    Positioning
      • Horizontal and Vertical Alignment
      • Margin – Separates an element from neighboring elements
      • Padding – Enlarges the effective size of an element by a thickness
      (A sketch of these panels and positioning properties appears after the Controls Overview notes.)

    Controls Overview
      • Not all controls are created equal
      • Silverlight is a subset of WPF, so many WPF controls do not exist in the core Silverlight release
      • The Silverlight Toolkit continues to add controls, but they are released in different quality bands
      • Plenty of good 3rd-party controls fill the gaps
      • Windows Phone 7 is to have 95% of controls available in the Silverlight Core and Toolkit
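    To make the layout notes concrete, here is a minimal XAML sketch of my own (not from the session) showing a Grid hosting a StackPanel, with Margin and Padding in play:

        <Grid xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto" />
                <RowDefinition Height="*" />
            </Grid.RowDefinitions>
            <!-- Margin separates the header from neighboring elements -->
            <TextBlock Grid.Row="0" Text="Sales Manager" Margin="8"
                       HorizontalAlignment="Center" />
            <!-- StackPanel flows its children vertically -->
            <StackPanel Grid.Row="1" Margin="8">
                <TextBox x:Name="CustomerName" Margin="0,0,0,4" />
                <!-- Padding enlarges the effective size of the button -->
                <Button Content="Save" Padding="6,2,6,2" />
            </StackPanel>
        </Grid>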
    Events and Commands
      • Standard .NET Events
      • Routed Events
      • Commands – based on the ICommand interface – a logical action that can be invoked in several ways

    Adding Style to Your Application
      • Resource Dictionaries – Contain a hash table of key/value pairs. Silverlight can only use Static Resources, whereas WPF can also use Dynamic Resources
      • Visual State Manager
      • Silverlight 4 supports implicit styles
      • ResourceDictionary.MergedDictionaries combines many different file-based resources

    Downloads

    Read the article

  • Using progress dialog in Visual Studio extensions

    - by Utkarsh Shigihalli
    Originally posted on: http://geekswithblogs.net/onlyutkarsh/archive/2014/05/23/using-progress-dialog-in-visual-studio-extensions.aspx

    As a Visual Studio extension developer, you are required to keep the aesthetics of Visual Studio intact when you integrate your extension with it. Your extension looks odd when you try to use plain Windows controls and dialogs in it. The Visual Studio SDK exposes many interfaces so that your extension looks as integrated with Visual Studio as possible. When your extension is performing a long-running task, you have many options for notifying the user of progress. One such option is the Visual Studio status bar; I have previously blogged about displaying progress through it. In this blog post I am going to highlight another way, using the IVsThreadedWaitDialog2 interface. One thing to note: as the interface name suggests, it is a dialog, so the user cannot perform any other action while it is being shown. Visual Studio still seems responsive to the user, however, even while the task is being performed. Visual Studio itself makes heavy use of this interface; one example is when you are loading a solution (.sln) with lots of projects, where Visual Studio displays the dialog implemented by this interface (screenshot below).

    So the first step is to get an instance of the IVsThreadedWaitDialog2 interface using the IServiceProvider interface:

        var dialogFactory = _serviceProvider.GetService(typeof(SVsThreadedWaitDialogFactory))
                                as IVsThreadedWaitDialogFactory;
        IVsThreadedWaitDialog2 dialog = null;
        if (dialogFactory != null)
        {
            dialogFactory.CreateInstance(out dialog);
        }

    So if you have the package initialized properly, the out parameter dialog will not be null and will contain an instance of the IVsThreadedWaitDialog2 interface. Once the instance is obtained, you call its methods to manage the dialog. I will cover three methods, StartWaitDialog, EndWaitDialog and HasCanceled, in this blog post. You show the progress dialog as below:

        if (dialog != null &&
            dialog.StartWaitDialog(
                "Threaded Wait Dialog",
                "VS is Busy",
                "Progress text",
                null,
                "Waiting status bar text",
                0,
                false,
                true) == VSConstants.S_OK)
        {
            Thread.Sleep(4000);
        }

    As you can see from the method signature, it is very similar to a standard Windows message box. If you pass true as the 7th parameter to StartWaitDialog, you will also see a cancel button, allowing the user to cancel the running task. You can react when the user cancels the task as below:

        bool isCancelled;
        dialog.HasCanceled(out isCancelled);
        if (isCancelled)
        {
            MessageBox.Show("Cancelled");
        }

    Finally, you can close the dialog when the task completes, as below:

        int usercancel;
        dialog.EndWaitDialog(out usercancel);

    To help you quickly try out the above code, I have created a sample, available for download from GitHub. The sample creates a tool window with two buttons to demo the scenarios explained above. The tool window can be accessed by clicking View –> Other Windows –> ProgressDialogDemo Window.

    Read the article

  • Java Developer Days India Trip Report

    - by reza_rahman
    You are probably aware of Oracle's decision to discontinue the relatively resource-intensive regional JavaOnes in favor of more Java Developer Days, virtual events, and deeper involvement with independent conferences. In comparison to the regional JavaOnes, Java Developer Days are smaller, shorter (typically one full day), more focused (mostly Oracle speakers/topics) and more local (targeting cities). For those who have been around the Java ecosystem for a few years, they are basically the current incarnation of the highly popular and developer-centric Sun Tech Days.

    From October 21st through October 25th I spoke at Java Developer Days India. This was basically three separate but identical events in the cities of Pune (October 21st), Chennai (October 24th) and Bangalore (October 25th). For those with some familiarity with India, other than Hyderabad these cities are India's IT powerhouses. The events were basically focused on Java EE. I delivered five of the sessions (yes, you read that right), while my friend NetBeans Group Product Manager Ashwin Rao delivered three talks. Jagadish Ramu from the GlassFish team in India helped me out in Bangalore by delivering two sessions. It was also a pleasure to introduce my co-contributor to the Cargo Tracker Java EE Blue Prints project, Vijay Nair, at Bangalore during the opening talk. I thought it was a great dynamic between Ashwin and me, flipping between talking about the new features and demoing live code in NetBeans. The following were my sessions (source PDF and abstracts posted as usual on my SlideShare account):

      • JavaEE.Next(): Java EE 7, 8, and Beyond
      • Building Java HTML5/WebSocket Applications with JSR 356
      • What's New in Java Message Service 2
      • JAX-RS 2: New and Noteworthy in the RESTful Web Services API
      • Using NoSQL with JPA, EclipseLink and Java EE

    The events went well and were packed in all three cities. The Q&A was great, and Indian developers were particularly generous with kind words :-). It seemed the event and our presence were appreciated in the truest sense, which I must say is a rarity. The events were exhausting but very rewarding at the same time.

    As hectic as the three-city trip was, I tried to see at least some of the major sights (mostly at night), since this was my very first time in India. I think the slideshow below is a good representation of the riddle wrapped up in an enigma that is India (and the rest of the Indian subcontinent for that matter). Ironically enough, what struck me the most during this trip is the woman pictured below, Shushma. My chauffeur, tour guide and friend for a day, she fluidly navigated the madness that is Mumbai traffic with skills that would make Evel Knievel blush, while simultaneously pointing out sights and prompting me to take pictures (Mumbai was my stopover and gateway to/from India). In some ways she is probably the most potent symbol of the new India. When we parted ways I told her she should take solace in the fact that she has won, mostly without a fight, a potentially hazardous battle her sisters across the Arabian Sea are still fighting. I'm not sure she entirely understood the significance of what I told her. I hope that she did. I also had occasion to take a pretty cool local bus ride from Chennai to Bangalore instead of yet another boring flight. All in all I really enjoyed the trip to India and hope to return again soon. Jai Hind :-)!

    Read the article

  • SharePoint 2010 Sandboxed solution SPGridView

    - by Steve Clements
    If you didn't know, you probably will soon: the SPGridView is not available in Sandboxed solutions. To be honest, there doesn't seem to be a great deal of information out there about the whys and what-nots; basically, it's not part of the Sandbox SharePoint API. Of course, the error message from SharePoint is about as useful as a punch in the face:

        An unexpected error has been encountered in this Web Part. Error: A Web Part or Web Form Control on this Page cannot be displayed or imported. You don't have Add and Customize Pages permissions required to perform this action

    …that's if you have debug=true set; if not, the classic "This webpart cannot be added"!! Love that one! But with a little digging you should find something like this:

        [TypeLoadException: Could not load type Microsoft.SharePoint.WebControls.SPGridView from assembly 'Microsoft.SharePoint, Version=14.900.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c'.]

    Depending on what you want to do with the SPGridView, this may not help at all. But I'm looking to inherit the theme of the site and style the grid accordingly. After spending a bit of time with Chrome's FireBug, I was able to get the required CSS classes. I created my own class inheriting from GridView (note the lack of a preceding SP!) and simply set the styles in there.

    Inherit from the standard GridView:

        public class PSGridView : GridView

    Set the styles in the constructor:

        public PSGridView()
        {
            this.CellPadding = 2;
            this.CellSpacing = 0;
            this.GridLines = GridLines.None;
            this.CssClass = "ms-listviewtable";
            this.Attributes.Add("style", "border-bottom-style: none; border-right-style: none; width: 100%; border-collapse: collapse; border-top-style: none; border-left-style: none;");

            this.HeaderStyle.CssClass = "ms-viewheadertr";
            this.RowStyle.CssClass = "ms-itmhover";
            this.SelectedRowStyle.CssClass = "s4-itm-selected";
            this.RowStyle.Height = new Unit(25);
        }

    Then, as you can't override the Columns property setter, you need a custom method to add each column and set its style; a sketch of such a helper follows below. And that should be enough to get a nicely styled grid without the need for the SPGridView. But seriously… get the SPGridView in the Sandbox!!!

    Technorati Tags: Sharepoint 2010, SPGridView, Sandbox Solutions, Sandbox
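    Here is the kind of helper I mean (a sketch only, untested; the header CSS class is an assumption based on the standard list view markup):

        // The Columns collection cannot be replaced, only appended to,
        // so style each field as it is added.
        public void AddStyledColumn(DataControlField field)
        {
            // ms-vh2 is the class SharePoint list views use for column headers.
            field.HeaderStyle.CssClass = "ms-vh2";
            this.Columns.Add(field);
        }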

    Read the article

  • Web.config WordPress rewrite rules next to Magento

    - by Flo
    I've installed Magento on IIS in the folder E:\mydomain\wwwroot (I already have it all running correctly). I have no deeper magento folder; I placed all the files directly in the wwwroot folder, so:

        wwwroot\app
        wwwroot\downloader
        wwwroot\errors
        wwwroot\includes
        etc...

    UPDATE: since I'm on IIS, my .htaccess is ignored completely and my web.config rules are used instead. Here's my web.config in folder e:\mydomain\wwwroot:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Magento SEO: remove index.php from URL">
                  <match url="^(?!index.php)([^?#]*)(\\?([^#]*))?(#(.*))?" />
                  <conditions>
                    <add input="{URL}" pattern="^/(media|skin|js)/" ignoreCase="false" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" ignoreCase="false" negate="true" />
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" ignoreCase="false" negate="true" />
                  </conditions>
                  <action type="Rewrite" url="index.php/{R:0}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    Next, I wanted to install WordPress. I unzipped all the files into the folder e:\mydomain\wwwroot\wordpress, browsed to www.mydomain.com/wordpress/wp-admin/install.php, where I configured everything for my database, and everything was installed correctly. I then navigate to http://www.mydomain.com/wordpress/wp-login.php, where I type my credentials. I seem to be logged in and am redirected to http://www.mydomain.com/wordpress/wp-admin/, but there I receive an empty page.

    I enabled detailed error messages in IIS following this article: http://www.iis.net/learn/troubleshoot/diagnosing-http-errors/how-to-use-http-detailed-errors-in-iis. I also checked with Fiddler and see that I receive a 500 error:

        GET /wordpress/wp-admin/ HTTP/1.1
        Host: www.mydomain.com
        Connection: keep-alive
        Cache-Control: max-age=0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.76 Safari/537.36
        Referer: http://www.mydomain.com/wordpress/wp-login.php
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8,nl;q=0.6
        Cookie: wordpress_fabec4083cf12d8de89c98e8aef4b7e3=floran%7C1381236774%7C2d8edb4fc6618f290fadb49b035cad31; wordpress_test_cookie=WP+Cookie+check; wordpress_logged_in_fabec4083cf12d8de89c98e8aef4b7e3=floran%7C1381236774%7Cbf822163926b8b8df16d0f1fefb6e02e

        HTTP/1.1 500 Internal Server Error
        Content-Type: text/html
        Server: Microsoft-IIS/7.5
        X-Powered-By: PHP/5.4.14
        X-Powered-By: ASP.NET
        Date: Sun, 06 Oct 2013 12:56:03 GMT
        Content-Length: 0

    My WordPress web.config in folder e:\mydomain\wwwroot\wordpress contains:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="wordpress" patternSyntax="Wildcard">
                  <match url="*"/>
                  <conditions>
                    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true"/>
                    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true"/>
                  </conditions>
                  <action type="Rewrite" url="index.php"/>
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    I also want my WordPress articles to be available at www.mydomain.com/blog instead of www.mydomain.com/wordpress. Of course, my admin links for Magento and WordPress should also keep working. How can I configure my web.config files to achieve the above? A sketch of the kind of rule I have in mind for the /blog part is below.
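    For the /blog part, I wondered whether a rule along these lines, placed before the Magento rule in the root web.config, might map /blog requests onto the wordpress folder (just a sketch of what I have in mind, untested):

        <rule name="WordPress blog" stopProcessing="true">
          <match url="^blog(/.*)?$" />
          <action type="Rewrite" url="wordpress{R:1}" />
        </rule>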

    Read the article

  • No database selected error in CodeIgniter running on MAMP stack

    - by Apophenia Overload
    First off, does anyone know of a good place to get help with CodeIgniter? The official community forums are somewhat disappointing in terms of getting many responses. I have CodeIgniter installed on a regular MAMP stack, and I'm working on this tutorial. However, I have only gone through the Created section, and currently I am getting a "No database selected" error.

    Model:

        <?php
        class Submit_model extends Model {

            function submitForm($school, $district)
            {
                $data = array(
                    'school'   => $school,
                    'district' => $district
                );
                $this->db->insert('your_stats', $data);
            }
        }

    View:

        <?php $this->load->helper('form'); ?>
        <?php echo form_open('main'); ?>
        <p><?php echo form_input('school'); ?></p>
        <p><?php echo form_input('district'); ?></p>
        <p><?php echo form_submit('submit', 'Submit'); ?></p>
        <?php echo form_close(); ?>

    Controller:

        <?php
        class Main extends Controller {

            function index()
            {
                // Check if the form is submitted
                if ($this->input->post('submit'))
                {
                    $school   = $this->input->xss_clean($this->input->post('school'));
                    $district = $this->input->xss_clean($this->input->post('district'));

                    $this->load->model('submit_model');

                    // Add the post
                    $this->submit_model->submitForm($school, $district);
                }
                $this->load->view('main_view');
            }
        }

    database.php:

        $db['default']['hostname'] = "localhost:8889";
        $db['default']['username'] = "root";
        $db['default']['password'] = "root";
        $db['default']['database'] = "stats_test";
        $db['default']['dbdriver'] = "mysql";
        $db['default']['dbprefix'] = "";
        $db['default']['pconnect'] = TRUE;
        $db['default']['db_debug'] = TRUE;
        $db['default']['cache_on'] = FALSE;
        $db['default']['cachedir'] = "";
        $db['default']['char_set'] = "utf8";
        $db['default']['dbcollat'] = "utf8_general_ci";

    config.php:

        $config['base_url'] = "http://localhost:8888/ci/";
        ...
        $config['index_page'] = "index.php";
        ...
        $config['uri_protocol'] = "AUTO";

    So, why is it giving me this error message?

        A Database Error Occurred
        Error Number: 1046
        No database selected
        INSERT INTO `your_stats` (`school`, `district`) VALUES ('TJHSST', 'FairFax')

    Is there any way for me to test if CodeIgniter can actually detect the MySQL databases I've created with phpMyAdmin in my MAMP stack? A sketch of the kind of check I have in mind is below.
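    For instance, would something like this quick check work (a sketch; I have not tried it)? It loads the database library explicitly and lists the tables CodeIgniter can see:

        <?php
        class Dbcheck extends Controller {

            function index()
            {
                // Load the DB library explicitly, in case it is not
                // autoloaded in application/config/autoload.php.
                $this->load->database();

                // If the connection and database selection worked,
                // this prints the tables in stats_test.
                foreach ($this->db->list_tables() as $table)
                {
                    echo $table . '<br />';
                }
            }
        }

    I also wonder whether I should use 127.0.0.1 with MAMP's port instead of "localhost:8889", since I have read that the MySQL client ignores the port and tries a socket when the host is "localhost".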

    Read the article

  • SQL SERVER – Discard Results After Query Execution – SSMS

    - by pinaldave
    The first thing I do every day is turn on the computer. Today, as soon as I turned it on, I saw a chat message from a friend. He was a bit confused and wanted me to help him. As usual, I am keeping the relevant parts of our conversation in focus and documenting it as a chat. Let us call him Ajit.

    Ajit: Pinal, every time I run a query there is no result displayed in SSMS, but when I run the query in my application it works and returns an appropriate result.
    Pinal: Have you tried it with different parameters?
    Ajit: Same thing. However, it works from another computer when I connect to the same server with the same query parameters.
    Pinal: What? That is new, and I believe it is something to do with SSMS and not with the server. Send me a screenshot please.
    Ajit: I believe so; let me send you a screenshot.
    Pinal: (looking at the screenshot) Oh man, there is no results tab at all.
    Ajit: That is what the problem is. It does not have the tab which displays the result. This works just fine from another computer.
    Pinal: Have you referred to Nakul's blog post – SSMS – Query result options – Discard result after query executes – which talks about a setting that can discard the query results after execution?
    (After a while)
    Ajit: It seems that on the computer where I am running the query, my SSMS has the option enabled that discards results. I fixed it by following Nakul's blog post.
    Pinal: Great!

    Quite often I get the question: what is the importance of this feature? Let us first see how to turn it on or off in SQL Server Management Studio 2012. In SSMS 2012 go to Tools >> Options >> Query Results >> SQL Server >> Results to Grid >> Discard Results After Query Execution. When enabled, this option will discard results after execution. The advantage of enabling the option is that it improves performance by using less memory. However, the real question is why someone would enable or disable the option: in what cases would someone want to run a query but not care about the result? On the face of it, it does not make sense at all to run a query and not care about the result. As a matter of fact, though, I can see quite a few reasons for using this option. I often enable it when I am doing a performance tuning exercise. During such an exercise, when I am working with execution plans and do not need the results to verify every time, or when I am tuning indexes and studying their effect on the execution plan, I do not need the results. In those situations I keep this option on and discard the results. It always helps me big time, as in most performance tuning exercises I am dealing with a huge amount of data, and dealing with that data can be expensive.

    Nakul has done the experiment here already, but I am going to repeat it using the AdventureWorks database. Run the following T-SQL script with and without the option to discard the results enabled:

        USE AdventureWorks2012
        GO
        SELECT *
        FROM Sales.SalesOrderDetail
        GO 10

    After enabling Discard Results After Query Execution:

    After disabling Discard Results After Query Execution:

    Well, this is indeed a good option when someone is debugging an execution plan or does not want the result to be displayed. Please note that this option does not reduce IO or CPU usage for SQL Server; it just discards the results after execution, and it is a good help for debugging on a development server.
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Editor's Notebook - Social Aura: Insights from the Oracle Social Media Summit

    - by user462779
    Panelists talk social marketing at the Oracle Social Media Summit

    On November 14, I traveled to Las Vegas for the first-ever Oracle Social Media Summit. The two-day event featured an impressive collection of social media luminaries including David Kirkpatrick (founder and CEO of Techonomy Media and author of The Facebook Effect), John Yi (Head of Marketing Partnerships, Facebook), Matt Dickman (EVP of Social Business Innovation, Weber Shandwick), and Lyndsay Iorio (Social Media & Communications Manager, NBC Sports Group), among others. It was also a great opportunity to talk shop with some of our new Vitrue and Involver colleagues, who had been returning great social media results even before their companies were acquired by Oracle.

    I was live tweeting the event from @OracleProfit, which was great for those who wanted to follow along with the proceedings from the comfort of their office or blackjack table. But I've also found over the years that live tweeting an event is a handy way to take notes: I can sift back through my record of what people said or thoughts I had at the time and organize the Twitter messages into some kind of summary account of the proceedings. I've had nearly a month to reflect on the presentations and conversations at the event, and a few key topics have emerged.

    David Kirkpatrick's comment during the opening presentation really set the stage for the conversations that followed. Especially if you are a marketer or publisher, the idea that you are in a one-way broadcast relationship with your audience is a thing of the past. "Rising above the noise" does not mean reaching for a megaphone, ALL CAPS, or exclamation marks. Hype will not motivate social media denizens to do anything but unfollow and tune you out. But knowing your audience, creating quality content and/or offers for them, treating them with respect, and making an authentic effort to please them: that is what I believe is now necessary. And Kirkpatrick's comment early in the day really made the point.

    Later in the day, our friends @Vitrue demonstrated this point by elaborating on a comment by Facebook's John Yi: if a social strategy is comprised of nothing more than cutting and pasting the same message into different social media properties, you're missing the opportunity to have an actual conversation. That's not shouting at your audience, but it does feel like an empty gesture.

    Walter Benjamin, perplexed by auraless Twitter messages

    Not to get too far afield, but the 20th-century cultural critic Walter Benjamin has a concept that is useful for understanding the dynamics of the empty social media gesture: aura. In his work The Work of Art in the Age of Mechanical Reproduction, Benjamin struggled to understand the difference he perceived between the value of a hand-made art object (a painting, wood cutting, sculpture, etc.) and a photograph. For Benjamin, aura is similar to the "soul" of an artwork: the intangible essence that is created when an artist picks up a tool and puts creative energy and effort into a work. I'll defer to Wikipedia: "He argues that the "sphere of authenticity is outside the technical" so that the original artwork is independent of the copy, yet through the act of reproduction something is taken from the original by changing its context. He also introduces the idea of the "aura" of a work and its absence in a reproduction." So make sure you put aura into your social interactions; don't just mechanically reproduce them.

    Keeping aura in your interactions requires the intervention of an actual human being. That's why @NoahHorton's comment about content curation struck me as incredibly important. Maybe it's just my own prejudice, being in the content curation business myself. And it's not to totally discount machine-aided content management systems, content recommendation engines, and other tech-driven tools for building an exceptional content experience. It's just that without that human interaction, the editor who reviews the analytics and responds to user feedback, interactions over social media feel a bit empty. It is SOCIAL media, right? (We'll leave the conversation about social machines for another day.)

    At the end of the day, experimentation is key. Just like trying to find the right joke to tell at the beginning of your presentation or a good opening line at a cocktail party, social media messages and interactions can take some trial and error. Don't be afraid to try things, tinker with incomplete ideas, abandon things that don't work, and engage in the conversation. And make sure your heart is in it, otherwise your audience can tell. And finally:

    Read the article

  • BIP BIServer Query Debug

    - by Tim Dexter
    With some help from Bryan, I have uncovered a way to debug, or at least log, what BIServer is doing when BIP sends it a query request. This is not for those of you querying the database directly, but for those using the BIServer and its data model to fetch data for a BIP report. If you have written or used the query builder against BIServer, and when you run the report it chokes with a cryptic message that you have no clue about, read on.

    When BIP runs a piece of BIServer logical SQL to fetch data, it does not appear to validate it; it just passes it through. So what is BIServer doing on its end? As you may know, you are not writing regular physical SQL; it is actually logical SQL, e.g.:

        select Jobs."Job Title" as "Job Title",
               Employees."Last Name" as "Last Name",
               Employees.Salary as Salary,
               Locations."Department Name" as "Department Name",
               Locations."Country Name" as "Country Name",
               Locations."Region Name" as "Region Name"
        from HR.Locations Locations, HR.Employees Employees, HR.Jobs Jobs

    The tables might not even be physical tables; we don't care, that's what the BIServer and its model are for. You have put all the effort into building the model: just go get me the data from wherever it might be. The BIServer takes the logical SQL, uses its vast brain to work out what the physical SQL is, executes it, and passes the result back to BIP:

        select distinct T32556.JOB_TITLE as c1,
             T32543.LAST_NAME as c2,
             T32543.SALARY as c3,
             T32537.DEPARTMENT_NAME as c4,
             T32532.COUNTRY_NAME as c5,
             T32577.REGION_NAME as c6
        from JOBS T32556, REGIONS T32577, COUNTRIES T32532,
             LOCATIONS T32569, DEPARTMENTS T32537, EMPLOYEES T32543
        where ( T32532.COUNTRY_ID = T32569.COUNTRY_ID
            and T32532.REGION_ID = T32577.REGION_ID
            and T32537.DEPARTMENT_ID = T32543.DEPARTMENT_ID
            and T32537.LOCATION_ID = T32569.LOCATION_ID
            and T32543.JOB_ID = T32556.JOB_ID )

    Not a very tough example, I know, but you get the idea. So how do I know what the BIServer is up to? How can I find out what the issue might be if BIServer chokes on my query? There are a couple of steps:

    1. In the Administrator tool, set the logging level for the Administrator user to something greater than the default '0'. '7' is going to give you the max. Just remember to take it back down after you have finished debugging.
    2. I needed to bounce my BIServer service.

    Now here's the secret sauce. Prefix the following to your BIP query, setting the level to the one you chose in the admin tool:

        set variable LOGLEVEL = 7;

    Now run your BIP report. With the prefix in place, BIServer will write to the NQQuery.log file, located in the ./OracleBI/server/Log directory. In there you are going to find the complete process the BIServer has gone through to try to get the data back for you; a complete example of a prefixed query is shown below.

    A quick note: if the BIServer can, it is going to hit that great BIEE cache to get your data, and you may not see the full log. If this is the case, get into the Administration page (via the browser login) and clear out your BIP report cursor, then re-run. This will hopefully help out if you are trying to debug that annoying BIP report that will not run or is returning some strange data. Don't forget to turn that logging level back down once you are done; this will avoid the DBA screaming at you for sucking up all the disk space on the system.
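    Putting it together, the request BIP sends would look something like this (reusing the logical SQL from above):

        set variable LOGLEVEL = 7;
        select Jobs."Job Title" as "Job Title",
               Employees."Last Name" as "Last Name",
               Employees.Salary as Salary,
               Locations."Department Name" as "Department Name",
               Locations."Country Name" as "Country Name",
               Locations."Region Name" as "Region Name"
        from HR.Locations Locations, HR.Employees Employees, HR.Jobs Jobs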

    Read the article

  • What I've Gained from being a Presenter at Tech Events

    - by MOSSLover
    Originally posted on: http://geekswithblogs.net/MOSSLover/archive/2014/06/12/what-ive-gained-from-being-a-presenter-at-tech-events.aspx I know I've been failing at blogging lately. As I've said before, life happens and it gets in the way of best-laid plans. I thought about creating some type of watch with an app that basically dictates to WordPress using Dragon NaturallySpeaking, but alas, there's no time, and the right processing capabilities just don't exist yet. So, to get to my point: Alison Gianotto created this blog post: http://www.snipe.net/2014/06/why-you-should-stop-stalling-and-start-presenting/. I like the message she states in this post, and I want to share something personal. It was around 2007 that I was seriously looking into leaving the technology field altogether and going back to school. I was calling places like Washington University in Saint Louis and University of Missouri Kansas City, asking them how I could go about getting into some type of postgraduate medical school program. My entire high school career was based on Medical Explorers and somehow becoming a doctor. I did not want to take my hobby and continue using it as a career mechanism. I was unhappy, but I didn't realize why I was unhappy at the time. It was really a lot of bad things involving the lack of self-confidence and self-esteem. Overall I was not in a good place, and it took me until 2011 to realize that I still was not in a good place in life. So in about April 2007 I started this blog that you guys have been reading or occasionally read. I kind of started passively stalking people by reading their blogs in the SharePoint and .Net communities. I also started listening to .Net Rocks and watching videos on their corresponding training for SharePoint, WCF, WF, and a bunch of other technologies. I wanted more knowledge, so someone suggested I go to a user group. I've told this story before about how I met Jeff Julian & John Alexander, so I will spare you those details. You know how I got to my first user group presentation and how I started getting involved with events, so I'll also spare those details. The point I want to touch on is that I went out and started speaking, and that path helped me gain the self-confidence and self-respect I needed. When I first moved to NYC I couldn't even ride a subway by myself or walk alone without getting lost. Now I feel like I can go out and solve any problem someone throws at me. So you see, what Alison states in her blog post is true, and I am a great example of that point. I stood in front of 800 or so people at SharePoint Conference in 2011 and spoke about a topic. In 2007 I would have hidden or stuttered the entire time. I have now spoken at over 70 events and user groups. I am a top 25 influencer in my technology. I was a Most Valuable Professional in Microsoft SharePoint for years in a row. People are constantly trying to gain my time, so that they can pick my brain for solutions and other life problems. I went from maybe five or six friends to hundreds of friends in various cities across the globe. I'm not saying it's instant fame or that it doesn't take a ton of work, but I have never once looked back and regretted the choice I made in 2007. It has led me to a lot of other things in my life, including more positivity and happiness. If anyone ever wants to contact me and pick my brain on a presentation, go ahead.
If you want me to help you find the best meetup that suits you for that presentation, I can try to help too (I might be a little more helpful in the Microsoft or iOS arenas though). The best thing I can say is don't be scared, just do it. If you need an audience I can try to pencil it into my schedule. I can't promise anything, but if you are in NYC I can at least try.

    Read the article

  • Combining Scrum, TFS2010 and Email to keep everyone in the loop

    - by Martin Hinshelwood
    Often you will receive rich information from your Product Owner (Customer) about tasks. That information can be in the form of Word documents, HTML emails and pictures, but you generally receive them in the context of an email. You need to keep these so your Team can refer to them later, and so you can send a “done” when the task has been completed. This preserves the “history” of the task and allows you to keep the relevant parties included in any future conversation. At SSW we keep the original email so that we can reply Done and delete the email. But keeping it in your email does not help other members of the team if they complete the task and need to send the “done”. Worse yet, the description field in Team Foundation Server 2010 (TFS 2010) does not support HTML and images, nor does the default task template support an “interested parties” or CC field. You can attach this content manually, but it can be time-consuming. Figure: Description only supports plain text, and History supports HTML with no images. What should we do? At SSW we always follow the rules, and it just so happens that we have rules both to achieve this and to make it easier. You should follow the existing Rules to Better Project Management and attach the email to your task so you can refer to and reply to it later when you close the task: Do you know what Outlook add-ins you need? Describe the work item request in an email Use Outlook Add-in to move the email to a TFS Work Item When replying to an email with “done” you should follow: Do you update Team Companion template, so the email "subject" doesn't change? Do you update Team Companion template, so you can generate a proper "done" mail? Following these simple rules will help your Product Owner understand you better, and allow your team to collaborate more effectively with each other. An added bonus is that we are keeping the email history in sync with TFS: when you “reply all” to the email, all of the interested parties to the Task are also included. This notifies those who may have been blocked by your task and keeps them up to date with its status. This has been published as Do you know to ensure that relevant emails are attached to tasks in our Rules to Better Scrum using TFS. What could we do better? I would like to see this process automated so that we capture the information correctly in the task without the need to use email. This would require a change to the process template in Team Foundation Server to add an “Interested Parties” field. Each reply to the email would need to be automatically processed into a Work Item. This could be done by adding a task identifier as the first item in the “Relates to” email header, and copying in an email address that you watch. This would then parse out the relevant information, add the new message to the history, update the “Interested parties” field and attach the images. Upon reflection, it may even be possible, though more difficult, to do this using ONLY the History field, including some of the header information in there to build a done email with history. This would not currently deal with email “forks” well, but I think it would be adequate for our needs. It would be nice if we could find time to implement this, but currently it is but a pipe dream. Maybe Microsoft could implement something in the next version of Team Foundation Server, and in the meantime we have a process that works well. Technorati Tags: Scrum,SSW Rules,TFS 2010,TFS 2008

    Read the article

  • Dual Screen will only mirror after 12.04 upgrade

    - by Ne0
    I have been using Ubuntu with a dual screen for years now; after upgrading to 12.04 LTS I cannot get my dual screen working properly. Graphics: 01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV350 AR [Radeon 9600] 01:00.1 Display controller: Advanced Micro Devices [AMD] nee ATI RV350 AR [Radeon 9600] (Secondary) I noticed I was using open source drivers and attempted to install the official binaries using the methods in this thread. Output: liam@liam-desktop:~$ sudo apt-get install fglrx fglrx-amdcccle Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be upgraded: fglrx fglrx-amdcccle 2 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. Need to get 45.1 MB of archives. After this operation, 739 kB of additional disk space will be used. Get:1 http://gb.archive.ubuntu.com/ubuntu/ precise/restricted fglrx i386 2:8.960-0ubuntu1 [39.2 MB] Get:2 http://gb.archive.ubuntu.com/ubuntu/ precise/restricted fglrx-amdcccle i386 2:8.960-0ubuntu1 [5,883 kB] Fetched 45.1 MB in 1min 33s (484 kB/s) (Reading database ... 328081 files and directories currently installed.) Preparing to replace fglrx 2:8.951-0ubuntu1 (using .../fglrx_2%3a8.960-0ubuntu1_i386.deb) ... Removing all DKMS Modules Error! There are no instances of module: fglrx 8.951 located in the DKMS tree. Done. Unpacking replacement fglrx ... Preparing to replace fglrx-amdcccle 2:8.951-0ubuntu1 (using .../fglrx-amdcccle_2%3a8.960-0ubuntu1_i386.deb) ... Unpacking replacement fglrx-amdcccle ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Setting up fglrx (2:8.960-0ubuntu1) ... update-alternatives: warning: forcing reinstallation of alternative /usr/lib/fglrx/ld.so.conf because link group i386-linux-gnu_gl_conf is broken. update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: forcing reinstallation of alternative /usr/lib/fglrx/ld.so.conf because link group i386-linux-gnu_gl_conf is broken. update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist. update-initramfs: deferring update (trigger activated) update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic-pae Loading new fglrx-8.960 DKMS files... Building only for 3.2.0-25-generic-pae Building for architecture i686 Building initial module for 3.2.0-25-generic-pae Done. fglrx: Running module version sanity check. 
- Original module - No original module exists within this kernel - Installation - Installing to /lib/modules/3.2.0-25-generic-pae/updates/dkms/ depmod....... DKMS: install completed. update-initramfs: deferring update (trigger activated) Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Setting up fglrx-amdcccle (2:8.960-0ubuntu1) ... Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic-pae Processing triggers for libc-bin ... ldconfig deferred processing now taking place liam@liam-desktop:~$ sudo aticonfig --initial -f aticonfig: No supported adapters detected When I attempt to get my settings back to what they were before upgrading, I get this message: requested position/size for CRTC 81 is outside the allowed limit: position=(1440, 0), size=(1440, 900), maximum=(1680, 1680) and GDBus.Error:org.gtk.GDBus.UnmappedGError.Quark._gnome_2drr_2derror_2dquark.Code3: requested position/size for CRTC 81 is outside the allowed limit: position=(1440, 0), size=(1440, 900), maximum=(1680, 1680) Any ideas on what I need to do to fix this issue?

    Read the article

  • Analysis Services (SSAS) - Unexpected Internal Error when processing (ProcessUpdate). Workaround/Resolution

    - by James Rogers
    Many implementations require the use of ProcessUpdate to support Type 1 slowly changing dimensions. ProcessUpdate drops all of the affected indexes and aggregations in partitions affected by data that changes in the Dimension on which the ProcessUpdate is being performed. Twice now I have had situations where the processing fails with "Internal error: An unexpected exception occurred." Any subsequent ProcessUpdate processing will also fail with the same error. In talking with Microsoft, the issue is corrupt indexes for the Dimension(s) being processed in the partitions of the affected measure group. I cannot guarantee that the following will correct your problem, but it did in my case and saved us quite a bit of down time.   Workaround: ProcessIndexes on the entire cube that is being processed and throwing the error. This corrected the problem on both 2008 and 2008 R2.   Pros: Does not require a complete rebuild of the data (ProcessFull) for either the Dimension or Cube. User access can continue while this ProcessIndexes is underway.   Cons: Can take a long time, especially on large cubes with many partitions, dimensions and/or aggregations. Query performance is usually severely impacted due to the memory and CPU requirements for Aggregation and Index building.   <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">  <Parallel>     <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">       <Object>         <DatabaseID>MyDatabase</DatabaseID>         <CubeID>MyCube</CubeID>       </Object>       <Type>ProcessIndexes</Type>       <WriteBackTableCreation>UseExisting</WriteBackTableCreation>     </Process>  </Parallel> </Batch>   The cube where the corruption exists can be found by having Profiler running while the ProcessUpdate is executing. The first partition that displays the "The Job has ended in failure." message in the TextData column will be part of the cube/measure group that has the corruption. You can try to run ProcessIndexes on just that measure group. This may correct the problem and save additional time if you have other large measure groups in the cube that are not affected by the corruption.   Remember to execute your normal ProcessUpdate batch after the successful completion of the ProcessIndexes. The ProcessIndexes does not pick up data changes.   Things that did not work: ProcessClearIndexes - why this doesn't work and ProcessIndexes does is unclear at this point. ProcessFull on the partition in question. In my latest case, this would clear up the problem for that partition. However, the next partition the ProcessUpdate touched that had data in it would generate an error. This leads me to believe the corruption problem will exist in all partitions in the affected measure group that have data in them.   NOTE: I experienced this problem in both a SQL 2008 and a SQL 2008 R2 Analysis Services environment, on separate servers built from the same relational database. 
This leads me to believe that some data condition in the tables used for the Dimension processing caused the corruption since the two environments were on physically separate hardware. I am waiting on Microsoft to analyze the dumps to give us more insight into what actually caused the corruption and will update this post accordingly.

    Read the article

  • How do I fix a corrupted harddrive after failed upgrade?

    - by Nil
    The problem originated when I was trying to fix this problem. Things went horribly, horribly wrong and I ended up with a new problem altogether. The last thing I did was run sudo apt-get install and that caused my system to freeze. I restarted my computer and it would not boot from the harddrive. I ran a copy of Ubuntu 12.10 from a flashdrive that I had and ran gparted to see if my partitions were all there. It returned this message: Invalid partition table on /dev/sda -- wrong signature 5208. The drive appeared as a 2TiB unallocated drive with an error. The drive had 4 partitions before (plus random unallocated space). There was a fat32 partition, an ext4 partition which contained ubuntu 13.04/13.10 (I don't even know which one at this point), an extended partition which contained a swap partition for my ubuntu partition (I was meaning to move that ubuntu partition into the extended partition, never got around to it), and another partition (I don't remember how I formatted it). I should also mention this is a 1TB harddrive. So in short, I have a corrupted partition table on my primary harddrive, which I boot from; how can I fix this? I tried mounting the drive with sudo mount /dev/sda1 /media/ubuntu, then I changed my directory to said folder and tried to list the files, and this monstrosity happened: $ ls ls: cannot access ??w?j^?.: Input/output error ls: cannot access ??(? ?x?.|: Input/output error ls: cannot access 6W_@?)?._??: Input/output error ls: cannot access HB0v???.A}?: Input/output error ls: cannot access ???.?X: Input/output error ls: cannot access t)?.+?l: Input/output error ls: cannot access ?h@ ?.@ : Input/output error ls: cannot access >? @??.???: Input/output error ls: cannot access m???.??: Input/output error ls: cannot access @ if??a?: Input/output error ls: cannot access ?M!vN$?.??n: Input/output error ls: cannot access ?o? ??.Bm`: Input/output error ls: cannot access ?:I??? M. : Input/output error ls: cannot access W??.??: Input/output error ls: cannot access ?: Input/output error ls: cannot access ?W?s??: Input/output error ls: cannot access ?v?k?.???: Input/output error ls: cannot access 5?$<N??: Input/output error .x????.??i: Input/output error ls: cannot access je????.j?1: Input/output error XjD?.???: Input/output error ls: cannot access W??n???.?: Input/output error ls: cannot access ?^x.$"?: Input/output error ls: cannot access !??*!??j.??: Input/output error ls: cannot access '-??k?^?.???: Input/output error ls: cannot access b?w?w?b.\??: Input/output error ls: cannot access o????"z.??B: Input/output error ls: cannot access ??b?h.?3-: Input/output error ls: cannot access ??.$7: Input/output error ls: cannot access )??K.bk: Input/output error ls: cannot access s??z?.?(?: Input/output error ls: cannot access ?F@?0?.@?: Input/output error .?D: Input/output error .??: Input/output error ls: cannot access?????. @: Input/output error ls: cannot access ?/?? ?.??: No such file or directory ls: cannot access rk?p4q(?.?k: Input/output error This looks promising. 
This is the output of fdisk -l $ sudo fdisk -l /dev/sda Warning: ignoring extra data in partition table 5 Warning: ignoring extra data in partition table 5 Warning: ignoring extra data in partition table 5 Warning: invalid flag 0x5208 of partition table 5 will be corrected by w(rite) Disk /dev/sda: 2199.0 GB, 2199023132672 bytes 255 heads, 63 sectors/track, 267349 cylinders, total 4294967056 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x44fdfe06 Device Boot Start End Blocks Id System /dev/sda1 113305600 894715903 390705152 c W95 FAT32 (LBA) /dev/sda2 894715904 1489307647 297295872 83 Linux /dev/sda3 1489309694 1497307135 3998721 5 Extended /dev/sda4 1497309184 1953523711 228107264 7 HPFS/NTFS/exFAT /dev/sda5 ? 3013257822 3688738171 337740175 aa Unknown

    Read the article

  • Error:couldn't read file-Kernel panic

    - by Thanos
    I have just installed Ubuntu 12.04.1. To be honest, I had to run the installation several times until it finished fine. When I finally managed to install it properly, I powered on the laptop and the GRUB menu showed up! I selected ubuntu generic. It takes some time to load, and when it does I get an error message stating error: couldn't read file Press any key to continue. If I press any key nothing happens. If I leave it there, in a short while there is a black screen loading which gives some weird messages: [0.946710] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block (0,0) [0.946755] Pid: 1, comm: swapper/0 Not tainted 3.2.0-29-generic #46-Ubuntu [0.946792] Call Trace: [0.946831] [<ffffffff81640ec8>] panic+0x91/0x1a4 [0.946869] [<ffffffff81cfc01e>] mount_block_root+0xdc/0x18e [0.946909] [<ffffffff81002930>] ? populate_rootfs_wait+0x300/0x9d0 [0.946947] [<ffffffff81cfc257>] mount_root+0x54/0x59 [0.946982] [<ffffffff81cfcec9>] prepare_namespace+0x16d/0x1a6 [0.947019] [<ffffffff81cfbd63>] kernel_init+0x153/0x158 [0.947094] [<ffffffff81cfbc10>] ? start_kernel+0x3bd/0x3bd [0.947129] [<ffffffff81664030>] ? gs_change+0x13/0x13 The thing is that the laptop isn't mine. A friend tried to dual boot Ubuntu alongside Windows 7 but he didn't succeed. The Ubuntu option was in GRUB, but when you tried to boot it, it rebooted from the start. So from a Live CD I erased Ubuntu, started Windows to check if something had gone wrong, and fortunately everything was OK. Windows started normally! So I tried to install Ubuntu. Before the installation was completed, the installer crashed! I was afraid that he had lost Windows, something that turned out to be true... At that point I tried to install Windows, but whichever version (XP, 7 {Home, Professional, Ultimate}, 8) I tried, it could never reach the end. So I tried to reinstall Ubuntu, but I was facing those weird messages. What can I do to move on? ______________________________________________________________________ EDIT1: I tried to check and fix (if possible) with GParted. It took a lot of hours, although GParted displays only 01:14. I restarted the system and now I get not exactly the same messages; the numbers in brackets [ ] are different: [0.818189] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block (0,0) [0.818235] Pid: 1, comm: swapper/0 Not tainted 3.2.0-29-generic #46-Ubuntu [0.818272] Call Trace: [0.818312] [<ffffffff81640ec8>] panic+0x91/0x1a4 [0.818351] [<ffffffff81cfc01e>] mount_block_root+0xdc/0x18e [0.818391] [<ffffffff81002930>] ? populate_rootfs_wait+0x300/0x9d0 [0.818428] [<ffffffff81cfc257>] mount_root+0x54/0x59 [0.818464] [<ffffffff81cfcec9>] prepare_namespace+0x16d/0x1a6 [0.818501] [<ffffffff81cfbd63>] kernel_init+0x153/0x158 [0.818574] [<ffffffff81cfbc10>] ? start_kernel+0x3bd/0x3bd [0.818610] [<ffffffff81664030>] ? gs_change+0x13/0x13 What on earth is going on? ______________________________________________________________________ EDIT2: I forgot to mention that my friend gave his laptop a punch during a game. After that his cooler began to make a weird noise, so I checked, and it is a bit bent but it is working. What I believe must be wrong is that his HDD makes a weird noise while trying to load Ubuntu, which means he might need a new HDD. Could that be true?

    Read the article

  • Knowledge Pathways Designer - Recommended Settings

    - by ted.henson
    The General page of the Options dialog box contains the application preferences for Knowledge Pathways Designer. It is recommended that you leave certain settings as they are, unless you have a specific reason for changing them. The following are a few of the settings on the General page with an explanation of the recommended setting. They are in the order they appear on the page: Allow version 2.0 style links: This option should remain disabled unless you were using content that was created using version 2.0 of Knowledge Pathways and you want the same linking functionality that existed in version 2.0. This feature enables you to reuse parts of titles that contain no AUs. However, keep in mind that this type of link is not a true link, but a cross between a copy and a link. To create a 2.0 style link, you drag and drop sections between titles. You can only create 2.0 style links to sections that belong to the Title AU. When creating a version 2.0 style link, your mouse pointer will change to indicate a 2.0 link is being created. Confirm deletion of outline items and Confirm deletion of titles: It is recommended that these options remain enabled to avoid deleting something by accident. Display tracking data loss warning when opening a published title: It is recommended that this option be enabled so you will receive the warning message when you open the development copy of a title, reminding you of the implications of your changes. Copy files when converting a Section to an Assignable Unit: This option should remain enabled unless you have a specific reason for not copying the files. If this is disabled, you will (in effect) lose your content files upon converting because they will not be copied to the new AU directory on the content root. In this case, you would need to use Windows Explorer to copy your files manually. Working with Spelling Options All of the spelling options are enabled by default. Your design team can review these options to determine if you want to make changes, depending upon your specific needs. Understanding Dictionary Options You should leave the dictionary options as they are, unless you have a specific reason for changing them. While you can delete the user (customizable) dictionary, doing so is not recommended. Setting Check In/Check Out Options The ability to check in and check out titles and AUs will impact the efficiency of your design team. Decide what your check in and check out processes are before you start developing titles. The Check In/Check Out page of the Options dialog box contains two options that affect what happens when you open a title using the Open Title dialog box. Both of these options are enabled by default and are described below: Check Out for editing enabled: This option ensures that the Check Out for editing option will be selected when you open the development copy of a title from the Open Title dialog box. If this option is disabled, you must select the Check Out for editing option every time you want to check out a title for editing. Attempt to Check Out for entire branch: When this option is enabled, Designer checks out the selected title and all AUs and sections that are part of that title, provided they are available for check out. If this option is disabled, you will only check out the Title AU and anything that belongs to that Title AU (e.g., sections, questions, etc.), but not other AUs. The Check In/Check Out page of the Options dialog box also contains options that control what happens when you close a title. 
You can choose one option in the Check In when Closing a Title area. The option selected is a matter of preference and you should determine which option is most appropriate for your design team.

    Read the article

  • Direct3D11 and SharpDX - How to pass a model instance's world matrix as an input to a vertex shader

    - by Nathan Ridley
    Using Direct3D11, I'm trying to pass a matrix into my vertex shader from the instance buffer that is associated with a given model's vertices and I can't seem to construct my InputLayout without throwing an exception. The shader looks like this: cbuffer ConstantBuffer : register(b0) { matrix World; matrix View; matrix Projection; } struct VIn { float4 position: POSITION; matrix instance: INSTANCE; float4 color: COLOR; }; struct VOut { float4 position : SV_POSITION; float4 color : COLOR; }; VOut VShader(VIn input) { VOut output; output.position = mul(input.position, input.instance); output.position = mul(output.position, View); output.position = mul(output.position, Projection); output.color = input.color; return output; } The input layout looks like this: var elements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0), new InputElement("INSTANCE", 0, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0) }; InputLayout = new InputLayout(device, signature, elements); The buffer initialization looks like this: public ModelDeviceData(Model model, Device device) { Model = model; var vertices = Helpers.CreateBuffer(device, BindFlags.VertexBuffer, model.Vertices); var instances = Helpers.CreateBuffer(device, BindFlags.VertexBuffer, Model.Instances.Select(m => m.WorldMatrix).ToArray()); VerticesBufferBinding = new VertexBufferBinding(vertices, Utilities.SizeOf<ColoredVertex>(), 0); InstancesBufferBinding = new VertexBufferBinding(instances, Utilities.SizeOf<Matrix>(), 0); IndicesBuffer = Helpers.CreateBuffer(device, BindFlags.IndexBuffer, model.Triangles); } The buffer creation helper method looks like this: public static Buffer CreateBuffer<T>(Device device, BindFlags bindFlags, params T[] items) where T : struct { var len = Utilities.SizeOf(items); var stream = new DataStream(len, true, true); foreach (var item in items) stream.Write(item); stream.Position = 0; var buffer = new Buffer(device, stream, len, ResourceUsage.Default, bindFlags, CpuAccessFlags.None, ResourceOptionFlags.None, 0); return buffer; } The line that instantiates the InputLayout object throws this exception: *HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect.* Note that the data for each model instance is simply an instance of SharpDX.Matrix. 
EDIT Based on Tordin's answer, it seems like I have to modify my code like so: var elements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0), new InputElement("INSTANCE0", 0, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE1", 1, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE2", 2, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("INSTANCE3", 3, Format.R32G32B32A32_Float, 0, 0, InputClassification.PerInstanceData, 1), new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0) }; and in the shader: struct VIn { float4 position: POSITION; float4 instance0: INSTANCE0; float4 instance1: INSTANCE1; float4 instance2: INSTANCE2; float4 instance3: INSTANCE3; float4 color: COLOR; }; VOut VShader(VIn input) { VOut output; matrix world = { input.instance0, input.instance1, input.instance2, input.instance3 }; output.position = mul(input.position, world); output.position = mul(output.position, View); output.position = mul(output.position, Projection); output.color = input.color; return output; } However, I still get an exception.

    Read the article

  • SQL SERVER – Query Hint – Contest Win Joes 2 Pros Combo (USD 198) – Day 1 of 5

    - by pinaldave
    In August 2011 we ran a contest where we gave away one book every day for an entire month. The contest was a huge success: lots of people participated and lots of books were given away. I have received lots of questions asking if we are doing something similar this month. Absolutely; instead of running a month-long contest, we are doing something more interesting. We are giving away a gift worth USD 198 every day this week: the Joes 2 Pros 5-volume (BOOK) SQL 2008 Development Certification Training Kit, one copy in India and one in the USA, two giveaways in total (worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza How to Win: Read the question Read the hints Answer the quiz in the Contact Form in the following format: Question Answer Name of the country (The contest is open for USA and India residents only) Two winners will be randomly selected and announced on August 20th. Question of the Day: Which of the following queries will return dirty data? a) SELECT * FROM Table1 (READUNCOMMITED) b) SELECT * FROM Table1 (NOLOCK) c) SELECT * FROM Table1 (DIRTYREAD) d) SELECT * FROM Table1 (MYLOCK) Query Hints: BIG HINT POST Most SQL people know what a “Dirty Record” is. You might also call it an “Intermediate record”. In case this is new to you, here is a very quick explanation. The simplest way to describe the steps of a transaction is to use the example of updating an existing record in a table. When the update runs, SQL Server gets the data from storage, such as a hard drive, and loads it into memory and your CPU. The data in memory is changed and then saved to the storage device. Finally, a message is sent confirming the rows that were affected. For a very short period of time the update takes the data and puts it into memory (an intermediate state), not a permanent state. For every data change to a table there is a brief moment where the change is made in this intermediate state, but is not committed. During this time, any other DML statement (SELECT, INSERT, DELETE, UPDATE) needing that data must wait until the lock is released. This is a safety feature put in place so that SQL Server evaluates only official data. Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 1. SQL Joes 2 Pros Development Series – Dirty Records and Table Hints SQL Joes 2 Pros Development Series – Row Constructors SQL Joes 2 Pros Development Series – Finding un-matching Records SQL Joes 2 Pros Development Series – Efficient Query Writing Strategy SQL Joes 2 Pros Development Series – Finding Apostrophes in String and Text SQL Joes 2 Pros Development Series – Wildcard – Querying Special Characters SQL Joes 2 Pros Development Series – Wildcard Basics Recap Next Step: Answer the quiz in the Contact Form in the following format: Question Answer Name of the country (The contest is open for USA and India) Bonus Winner: Leave a comment with your favorite article from the “additional hints” section and you may be eligible for a surprise gift. There is no country restriction for this Bonus Contest. 
Do mention why you liked any particular blog post, and I will announce the winner of the same along with the main contest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
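To make the BIG HINT above concrete, here is a minimal two-session sketch of a dirty read (the table and column names are hypothetical; run each session in a separate connection):

    -- Session 1: change a row inside a transaction, but do not commit yet.
    BEGIN TRANSACTION;
    UPDATE dbo.Table1 SET Amount = 999 WHERE Id = 1;  -- data is now in the intermediate state
    -- ...leave the transaction open...

    -- Session 2: an unhinted SELECT would wait for the lock to be released;
    -- the hinted read does not wait, and can return the uncommitted value 999 (a dirty record).
    SELECT * FROM dbo.Table1 WITH (NOLOCK) WHERE Id = 1;

    -- Session 1: roll back; the value Session 2 read never officially existed.
    ROLLBACK TRANSACTION;

The same behaviour can also be had by setting the READ UNCOMMITTED isolation level for the whole session instead of hinting each table.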

    Read the article

< Previous Page | 806 807 808 809 810 811 812 813 814 815 816 817  | Next Page >