Search Results

Search found 6992 results on 280 pages for 'exist'.


  • What changes were made to a document

    - by Daniel Moth
    Part of my job is writing functional specs. Due to the inevitably iterative and incremental nature of software design and development, these specs need to be updated with additions, deletions and changes over a period of time. When the time comes for a developer to implement features or update their design document (or for a tester to test the feature or update their test specs), they need to be working against the latest spec. The problem is that if they have already reviewed the document, they need a quick way to find the delta since the last time they reviewed it, to see what has changed and how their existing plans may be affected, instead of having to read the entire document again. Doing that is very easy, assuming your Word documents are hosted on SharePoint.
    1. Every time you review a document, note the SharePoint version and/or date (if it is a printed copy, make sure your printout includes the date in the footer – all my specs do).
    2. When you need to see what changed, open the document (make sure you are not using a cached or local offline copy), go to the "Review" tab on the ribbon and click the "Compare" button.
    3. Click the "Specific Version…" option. In the dialog that pops up, pick the last version you reviewed and click the "Compare" button. [TIP for authors: before checking in your document, always compare against the "Last Version" on SharePoint so you can add more complete check-in comments.]
    4. In addition to the document you have open, two other documents have just opened. One is in the background (flashing on your task bar) – close that one, as it is the old version.
    5. The other document is in the foreground and contains all the changes between the old version and the latest one. Do not make edits to this document; use it only for reading the changes. To find all the changes, on the ribbon under the "Review" tab, click "Reviewing Pane" to open the reviewing pane on the left. You can now click on each pink change to see what it is.
    6. When you are done reviewing changes, close the document and don't save any changes (remember, if you want to make edits/additions/comments, make them in the original document, which is still open).
    And now I have a URL to point people to when they ask about this – enjoy :-) Comments about this post are welcome at the original blog.

    Read the article

  • Repository/Updating/Upgrading Issue

    - by Jakob
    The other day I was asked to upgrade from 13.04 to 13.10; I was busy at the time and hit no. I cannot upgrade/update at this point – I get (error -11) or a 404 in the terminal, and in the Software Updater I get 'failed to download repository information.' I have tried changing my "Download From" setting from "Best" to "Main" and even a few other countries, and in "Other Software" I have tried disabling packages, but it doesn't seem to help whatsoever. I have tried several of the other commands to try and fix it, such as --fix-missing or sudo apt-get update clean. P.S. This has also affected my Thunderbird client: I cannot send/receive emails. Here is my error log when trying to upgrade:
        jakob@Skeletor:~$ sudo update-manager -d
        gpg: /tmp/tmpvejqvl/trustdb.gpg: trustdb created
        gpg: /tmp/tmpnayby6/trustdb.gpg: trustdb created
        Traceback (most recent call last):
          File "/usr/lib/python3/dist-packages/defer/__init__.py", line 483, in _inline_callbacks
            result = gen.throw(excep)
          File "/usr/lib/python3/dist-packages/UpdateManager/backend/InstallBackendAptdaemon.py", line 86, in commit
            True, close_on_done)
          File "/usr/lib/python3/dist-packages/defer/__init__.py", line 483, in _inline_callbacks
            result = gen.throw(excep)
          File "/usr/lib/python3/dist-packages/UpdateManager/backend/InstallBackendAptdaemon.py", line 158, in _run_in_dialog
            yield trans.run()
        aptdaemon.errors.TransactionFailed: Transaction failed: Package does not exist
        Package linux-headers-3.8.0-33 isn't available
        gpg: /tmp/tmp3kw_hl/trustdb.gpg: trustdb created
    And let me throw in my sudo apt-get update too, which has also been working only intermittently – I don't know what to change my repositories to, and disabling them has no effect:
        E: Some index files failed to download. They have been ignored, or old ones used instead.
    This is the short version, but it looks like this fairly consistently. Sometimes it downloads, sometimes it doesn't. Sometimes it tells me I have an update and then doesn't do anything. If it helps, I have recently had issues trying to install Samba as well, and connecting to the office's NAS drive. That works now, but I had to edit /etc/fstab and a few other things trying to get it to work. I understand it could also be a DNS problem, but this has been going on for a few days, and I've already tried changing the DNS server on my computer; however, I am not allowed to alter the DNS on our company's router.

    Read the article

  • Mixed Solaris 10 and 11 versions in logical domains on the same server

    - by jsavit
    One question that comes up frequently is whether you can mix Solaris 10 and Solaris 11 in different logical domains under Oracle VM Server for SPARC. The answer is yes, depending only on the system software requirements for the underlying hardware platform. Different versions of Solaris 10 and 11 can exist side-by-side on the same server and can act as control, service, I/O or guest domains, subject only to the minimum software levels documented in the System Requirements section of the Oracle VM Server for SPARC Release Notes. Here's an example just taken from a running system. First, here's the control domain, which is running Solaris 10. I've highlighted a guest running Solaris 11.
        # uname -a
        SunOS atl-sewr-24 5.10 Generic_147440-01 sun4v sparc SUNW,SPARC-Enterprise-T5220
        # ldm -V
        Logical Domains Manager (v 2.1)
        Hypervisor control protocol v 1.7
        Using Hypervisor MD v 1.3
        System PROM:
        Hypervisor v. 1.10.0 @(#)Hypervisor 1.10.0 2011/04/27 16:19
        # ldm list
        NAME               STATE   FLAGS   CONS   VCPU  MEMORY  UTIL  UPTIME
        primary            active  -n-cv-  SP     16    4G      1.6%  120d 17h
        atl-sewr-pool-148  active  -n----  5001   8     2G      0.1%  119d 21h
        atl-sewr-pool-152  active  -n----  5000   8     4G      0.2%  112d 19h
        atl-sewr-pool-154  active  -n----  5002   8     2G      0.1%  120d 15h
        atl-sewr-pool-155  active  -n----  5003   16    2G      0.0%  26d 14h 30m
    This system is running Oracle VM Server 2.1 with a Solaris 10 control domain. Hmm, I should update this machine to 2.2 when I get a few free moments. Upgrading is very straightforward. Here's a display logging into the highlighted guest:
        Last login: Mon May 21 10:18:16 2012 from dhcp-adc-twvpn-
        Oracle Corporation      SunOS 5.11      11.0    November 2011
        sewr@atl-sewr-pool-152:~$ uname -a
        SunOS atl-sewr-pool-152 5.11 11.0 sun4v sparc SUNW,SPARC-Enterprise-T5220
        sewr@atl-sewr-pool-152:~$ cat /etc/release
        Oracle Solaris 11 11/11 SPARC
        Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
        Assembled 18 October 2011
        sewr@atl-sewr-pool-152:~$ sudo virtinfo -ct
        Password:
        Domain role: LDoms guest
        Control domain: atl-sewr-24
        sewr@atl-sewr-pool-152:~$
    That's running the GA version of Solaris 11, so I probably should update that some time too. Note the use of the virtinfo -ct command that lets the guest get information about the hosting environment.
    Summary
    You can mix and match versions of Solaris in logical domains. All the different combinations work: Solaris 10 and/or Solaris 11 control and service domains with Solaris 10 and/or Solaris 11 guests. Mixing different guest OS levels on the same server is one of the traditional reasons for using virtual machines in the first place: since virtual machines were invented some 40 years ago, they have been used to run production and test systems in parallel while upgrading OS levels. This can easily be done with Oracle VM Server for SPARC (Logical Domains).

    Read the article

  • Using the jQuery UI Library in a MVC 3 Application to Build a Dialog Form

    - by ChrisD
    Using a simulated dialog window is a nice way to handle inline data editing. The jQuery UI library has a widget for a dialog window that makes it easy to get up and running with it in your application. With the release of ASP.NET MVC 3, Microsoft included the jQuery UI scripts and files in the MVC 3 project templates for Visual Studio. With the release of the MVC 3 Tools Update, Microsoft implemented the inclusion of those with NuGet as packages. That means we can get up and running using the latest version of the jQuery UI with minimal effort. To the code! Another post that might interest you covers jQuery Mobile and ASP.NET MVC 3 with C#.
    If you are starting with a new MVC 3 application and have the Tools Update, then you are a NuGet update and a <link> and <script> tag away from adding the jQuery UI to your project. If you are using an existing MVC project, you can still get the jQuery UI library added to your project via NuGet and then add the link and script tags. Assuming that you have pulled down the latest version (at the time of publishing it was 1.8.13), you can add the following link and script tags to your <head> tag:

        <link href="@Url.Content("~/Content/themes/base/jquery.ui.all.css")" rel="Stylesheet" type="text/css" />
        <script src="@Url.Content("~/Scripts/jquery-ui-1.8.13.min.js")" type="text/javascript"></script>

    The jQuery UI library relies upon the CSS files and some image files to handle rendering of its widgets (you can choose a different theme or roll your own if you like). Adding these to the stock _Layout.cshtml file results in the following markup:

        <!DOCTYPE html>
        <html>
        <head>
            <meta charset="utf-8" />
            <title>@ViewBag.Title</title>
            <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
            <link href="@Url.Content("~/Content/themes/base/jquery.ui.all.css")" rel="Stylesheet" type="text/css" />
            <script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script>
            <script src="@Url.Content("~/Scripts/modernizr-1.7.min.js")" type="text/javascript"></script>
            <script src="@Url.Content("~/Scripts/jquery-ui-1.8.13.min.js")" type="text/javascript"></script>
        </head>
        <body>
            @RenderBody()
        </body>
        </html>

    Our example will involve building a list of notes with an id, title and body. Each note can be edited and new notes can be added. The user will never have to leave the single page of notes to manage the note data. The add and edit forms will be delivered in a jQuery UI dialog widget, and the note list content will get reloaded via an AJAX call after each change to the list. To begin, we need to craft a model and a data management class. We will do this so we can simulate data storage and get a feel for the workflow of the user experience. The first class, named Note, has properties that represent our data model.

        namespace Website.Models
        {
            public class Note
            {
                public int Id { get; set; }
                public string Title { get; set; }
                public string Body { get; set; }
            }
        }

    The second class, named NoteManager, will be used to set up our simulated data storage and provide methods for querying and updating the data. We will take a look at the class content as a whole and then walk through each method after.

        using System.Collections.ObjectModel;
        using System.Linq;
        using System.Web;

        namespace Website.Models
        {
            public class NoteManager
            {
                public Collection<Note> Notes
                {
                    get
                    {
                        if (HttpRuntime.Cache["Notes"] == null)
                            this.loadInitialData();
                        return (Collection<Note>)HttpRuntime.Cache["Notes"];
                    }
                }

                private void loadInitialData()
                {
                    var notes = new Collection<Note>();
                    notes.Add(new Note
                    {
                        Id = 1,
                        Title = "Set DVR for Sunday",
                        Body = "Don't forget to record Game of Thrones!"
                    });
                    notes.Add(new Note
                    {
                        Id = 2,
                        Title = "Read MVC article",
                        Body = "Check out the new iwantmymvc.com post"
                    });
                    notes.Add(new Note
                    {
                        Id = 3,
                        Title = "Pick up kid",
                        Body = "Daughter out of school at 1:30pm on Thursday. Don't forget!"
                    });
                    notes.Add(new Note
                    {
                        Id = 4,
                        Title = "Paint",
                        Body = "Finish the 2nd coat in the bathroom"
                    });
                    HttpRuntime.Cache["Notes"] = notes;
                }

                public Collection<Note> GetAll()
                {
                    return Notes;
                }

                public Note GetById(int id)
                {
                    return Notes.Where(i => i.Id == id).FirstOrDefault();
                }

                public int Save(Note item)
                {
                    if (item.Id <= 0)
                        return saveAsNew(item);

                    var existingNote = Notes.Where(i => i.Id == item.Id).FirstOrDefault();
                    existingNote.Title = item.Title;
                    existingNote.Body = item.Body;
                    return existingNote.Id;
                }

                private int saveAsNew(Note item)
                {
                    item.Id = Notes.Count + 1;
                    Notes.Add(item);
                    return item.Id;
                }
            }
        }

    The class has a read-only property named Notes that handles instantiating a collection of Note objects in the runtime cache if it doesn't exist, and then returns the collection from the cache. This property is there to give us simulated storage so that we don't have to add a full-blown database (beyond the scope of this post). The private method loadInitialData handles pre-filling the collection of Note objects with some initial data and stuffs them into the cache. Both of these chunks of code would be refactored out with a move to a real means of data storage. The GetAll and GetById methods access our simulated data storage to return all of our notes or a specific note by id. The Save method takes in a Note object and checks to see if it has an Id less than or equal to zero (we assume that an Id that is not greater than zero represents a note that is new); if so, it calls the private method saveAsNew. If the Note item sent in has an Id, the code finds that Note in the simulated storage, updates the Title and Body, and returns the Id value. The saveAsNew method sets the Id, adds the note to the simulated storage, and returns the Id value. The increment of the Id is simulated here by getting the current count of the note collection and adding 1 to it. The setting of the Id is the only other chunk of code that would be refactored out when moving to a different data storage approach.
    With our model and data manager code in place, we can turn our attention to the controller and views. We can do all of our work in a single controller. If we use a HomeController, we can add an action method named Index that will return our main view. An action method named List will get all of our Note objects from our manager and return a partial view. We will use some jQuery to make an AJAX call to that action method and update our main view with the partial view content returned. Since the jQuery AJAX call will cache the call to the content in Internet Explorer by default (a setting in jQuery), we decorate the List, Create and Edit action methods with the OutputCache attribute and a duration of 0. This sends the no-cache flag back in the header of the content to the browser, and jQuery will pick that up and not cache the AJAX call. The Create action method instantiates a new Note model object and returns a partial view, specifying the NoteForm.cshtml view file and passing in the model. The NoteForm view is used for both the add and edit functionality. The Edit action method takes in the Id of the note to be edited, loads the Note model object based on that Id, and does the same return of the partial view as the Create method. The Save method takes in the posted Note object and sends it to the manager to save. It is decorated with the HttpPost attribute to ensure that it will only be available via a POST. It returns a Json object with a property named Success that can be used by the UX to verify everything went well (we won't use that in our example). Both the add and edit actions in the UX post to the Save action method, allowing us to reduce the amount of unique jQuery we need to write in our view. The contents of the HomeController.cs file:

        using System.Web.Mvc;
        using Website.Models;

        namespace Website.Controllers
        {
            public class HomeController : Controller
            {
                public ActionResult Index()
                {
                    return View();
                }

                [OutputCache(Duration = 0)]
                public ActionResult List()
                {
                    var manager = new NoteManager();
                    var model = manager.GetAll();
                    return PartialView(model);
                }

                [OutputCache(Duration = 0)]
                public ActionResult Create()
                {
                    var model = new Note();
                    return PartialView("NoteForm", model);
                }

                [OutputCache(Duration = 0)]
                public ActionResult Edit(int id)
                {
                    var manager = new NoteManager();
                    var model = manager.GetById(id);
                    return PartialView("NoteForm", model);
                }

                [HttpPost]
                public JsonResult Save(Note note)
                {
                    var manager = new NoteManager();
                    var noteId = manager.Save(note);
                    return Json(new { Success = noteId > 0 });
                }
            }
        }

    The view for the note form, NoteForm.cshtml, looks like so:

        @model Website.Models.Note

        @using (Html.BeginForm("Save", "Home", FormMethod.Post, new { id = "NoteForm" }))
        {
            @Html.Hidden("Id")
            <label class="Title">
                <span>Title</span><br />
                @Html.TextBox("Title")
            </label>
            <label class="Body">
                <span>Body</span><br />
                @Html.TextArea("Body")
            </label>
        }

    It is a strongly typed view for our Note model class. We give the <form> element an id attribute so that we can reference it via jQuery. The <label> and <span> tags give our UX some structure that we can style with some CSS. The List.cshtml view is used to render out a <ul> element with all of our notes.

        @model IEnumerable<Website.Models.Note>

        <ul class="NotesList">
            @foreach (var note in Model)
            {
            <li>
                @note.Title<br />
                @note.Body<br />
                <span class="EditLink ButtonLink" noteid="@note.Id">Edit</span>
            </li>
            }
        </ul>

    This view is strongly typed as well. It includes a <span> tag that we will use as an edit button. We add a custom attribute named noteid to the <span> tag that we can use in our jQuery to identify the Id of the note object we want to edit. The view, Index.cshtml, contains a bit of HTML block structure and all of our jQuery logic code.

        @{
            ViewBag.Title = "Index";
        }

        <h2>Notes</h2>

        <div id="NoteListBlock"></div>
        <span class="AddLink ButtonLink">Add New Note</span>
        <div id="NoteDialog" title="" class="Hidden"></div>

        <script type="text/javascript">
            $(function () {
                $("#NoteDialog").dialog({
                    autoOpen: false, width: 400, height: 330, modal: true,
                    buttons: {
                        "Save": function () {
                            $.post("/Home/Save",
                                $("#NoteForm").serialize(),
                                function () {
                                    $("#NoteDialog").dialog("close");
                                    LoadList();
                                });
                        },
                        Cancel: function () { $(this).dialog("close"); }
                    }
                });

                $(".EditLink").live("click", function () {
                    var id = $(this).attr("noteid");
                    $("#NoteDialog").html("")
                        .dialog("option", "title", "Edit Note")
                        .load("/Home/Edit/" + id, function () { $("#NoteDialog").dialog("open"); });
                });

                $(".AddLink").click(function () {
                    $("#NoteDialog").html("")
                        .dialog("option", "title", "Add Note")
                        .load("/Home/Create", function () { $("#NoteDialog").dialog("open"); });
                });

                LoadList();
            });

            function LoadList() {
                $("#NoteListBlock").load("/Home/List");
            }
        </script>

    The <div> tag with the id attribute of "NoteListBlock" is used as a container target for the load of the partial view content of our List action method. It starts out empty and will get loaded with content via jQuery once the DOM is loaded. The <div> tag with the id attribute of "NoteDialog" is the element for our dialog widget. The jQuery UI library will use the title attribute for the text in the dialog widget's top header bar. We start out with it empty here and dynamically change the text via jQuery based on whether the request is to add or to edit a note. This <div> tag is given a CSS class named "Hidden" that sets the display:none style on the element. Since our call to the jQuery UI method that turns the element into a dialog widget occurs in the jQuery document ready code block, the end user would otherwise see the <div> element rendered in the browser as the page renders and only see it hide after that jQuery call. Adding the display:none to the <div> element via CSS ensures that it is never rendered until the user triggers the request to open the dialog.
    The jQuery document ready block contains the setup for the dialog node, click event bindings for the edit and add links, and a call to a JavaScript function called LoadList that handles the AJAX call to the List action method. The .dialog() method is called on the "NoteDialog" <div> element and the options are set for the dialog widget. The buttons option defines two buttons and their click actions. The first is the "Save" button (the text in quotations is used as the text for the button), which does an AJAX post to our Save action method and sends the serialized form data from the note form (targeted with the id attribute "NoteForm"). Upon completion it closes the dialog widget and calls LoadList to update the UX without a redirect. The "Cancel" button simply closes the dialog widget. The .live() method handles binding a function to the "click" event on all elements with the CSS class named EditLink. We use the .live() method because it will catch and bind our function to elements even as the DOM changes. Since we will be constantly changing the note list as we add and edit, we want to ensure that the edit links get wired up with click events. The function for the click event on the edit links gets the noteid attribute and stores it in a local variable. Then it clears out the HTML in the dialog element (to ensure a fresh start), calls the .dialog() method and sets the "title" option (this sets the title attribute value), and then calls the .load() AJAX method to hit our Edit action method and inject the returned content into the "NoteDialog" <div> element. Once the .load() method is complete, it opens the dialog widget. The click event binding for the add link is similar to the edit one, only we don't need to get the id value and we load the Create action method. This binding is done via the .click() method because it only needs to be bound on the initial load of the page; the add button will always exist. Finally, we toss some CSS into the Content/Site.css file to style our form and the add/edit links.

        .ButtonLink { color: Blue; cursor: pointer; }
        .ButtonLink:hover { text-decoration: underline; }
        .Hidden { display: none; }
        #NoteForm label { display: block; margin-bottom: 6px; }
        #NoteForm label > span { font-weight: bold; }
        #NoteForm input[type=text] { width: 350px; }
        #NoteForm textarea { width: 350px; height: 80px; }

    With all of our code in place we can hit F5 and see our list of notes. Clicking an edit link brings up the dialog widget with the correct note data loaded, and clicking the add new note link brings up the dialog widget with the empty form. (The original post closes with screenshots of each, plus the final solution tree for the sample.)

    Read the article

  • How much freedom should a programmer have in choosing a language and framework?

    - by Spencer
    I started working at a company that is primarily C#-oriented. We have a few people who like Java and JRuby, but the majority of programmers here like C#. I was hired because I have a lot of experience building web applications and because I lean towards newer technologies like JRuby on Rails or Node.js. I have recently started on a project building a web application with a focus on getting a lot of stuff done in a short amount of time. The software lead has dictated that I use ASP.NET MVC 4 instead of Rails. That might be OK, except I don't know MVC 4, I don't know C#, and I am the only one responsible for creating the web application server and front-end UI. Wouldn't it make sense to use a framework that I already know extremely well (Rails) instead of using MVC 4? The two reasons behind the decision were that the tech lead doesn't know JRuby/Rails and that there would be no way to reuse the code. Counter-arguments:
    1. He won't be contributing to the code and is, frankly, not needed on this project, so it doesn't really matter whether he knows JRuby/Rails or not.
    2. We actually can reuse the code, since we have a lot of Java apps that JRuby can pull code from and vice versa. In fact, he has dedicated some resources to converting a Java library to C# instead of just running the Java library on the JRuby on Rails app, all because he doesn't like Java or JRuby.
    I have built many web applications, but using something unfamiliar is causing some spin-up and I am unable to build an awesome application in as short a time as I'm used to. That would be fine – learning new technologies is important in this field – but for this project we need to get a lot done in a short period of time. At what point should a developer be allowed to choose his tools? Is this dependent on the company? Does my company suck, or is this considered normal? Do greener pastures exist? Am I looking at this the wrong way? Bonus: Should I just keep my head down and move along at a snail's pace, or defy orders and go with what I know in order to make this project more successful? Edit: I had actually created a fully functional Rails application (on my own time) and showed it to the team, and it did not seem to matter. I am currently porting it to MVC 4 (slowly).

    Read the article

  • Why no more macro languages?

    - by Muhammad Alkarouri
    In this answer to a previous question of mine about scripting languages' suitability as shells, DigitalRoss identifies the difference between macro languages and "parsed typed" languages, in terms of how they treat strings, as the main reason that scripting languages are not suitable for shell purposes. Macro languages include nroff and m4, for example. What are the design decisions (or compromises) needed to create a macro programming language? And why are most of the mainstream languages parsed rather than macro-based? This very similar question (and the accepted answer) covers fairly well why parsed typed languages, take C for example, suffer from the use of macros. I believe my question here covers different ground: macro languages, or those working on a textual level, are not wholly failures. Arguably they include bash, Tcl and other shell languages, and they work well in a specific niche such as shells, as explained in my links above. Even m4 had a fairly long run of success, and some of the web template languages can be regarded as macro languages. It is quite possible that macros and parsed typing do not go well together, and that is why macros "break" common languages. In a language designed as a macro language, a macro like #define TWO 1+1 would have been covered by the common rules of the language rather than conflicting with those of the host language. And issues like "macros are not typed" and "code doesn't compile" are not relevant in the context of a language designed as untyped and interpreted, with little concern for efficiency. The question about the design decisions needed to create a macro language pertains to a hobby project I am currently working on: designing a new shell. Taking the previous question in context should clarify the difference between adding macros to a parsed language and my objective. I hope the clarification shows that the linked question doesn't cover this question, which has two parts: If I want to create a macro language (for a shell or a web template, for example), what limitations and compromises (and guidelines, if any exist) need to be made? (Probably answerable by a link or reference.) Why have no macro languages succeeded in becoming mainstream except in particular niches? What makes typed languages successful in large programming, while "stringly-typed" languages succeed in shells and one-liner-like environments?
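    To make the textual-substitution point concrete, here is a minimal sketch in Python (purely illustrative, not taken from the linked answers) contrasting a macro-style text expansion of TWO with a parsed, evaluated binding:

        # Minimal illustration of textual macro expansion vs. parsed evaluation.
        # The macro table and the TWO example are illustrative assumptions based
        # on the #define TWO 1+1 case mentioned above.
        macros = {"TWO": "1+1"}

        def expand(source: str) -> str:
            """Expand macros by plain text substitution, the way m4 or cpp would."""
            for name, body in macros.items():
                source = source.replace(name, body)
            return source

        expr = "TWO * 3"
        print(expand(expr))        # "1+1 * 3"
        print(eval(expand(expr)))  # 4 -- operator precedence applies after substitution

        two = 1 + 1                # a parsed, evaluated binding
        print(two * 3)             # 6 -- the value, not the text, is reused

    The macro version manipulates only text, so the meaning of what it produces depends entirely on the rules of whatever consumes that text; that is exactly the property that works well in shells and templates and badly inside a typed, parsed host language.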

    Read the article

  • Enterprise 2.0 Conference recap

    - by kellsey.ruppel
    We had a great week in Boston attending the Enterprise 2.0 Conference. We learned a lot from industry thought leaders and had a chance to speak with a lot of different folks about social and collaboration technologies and trends. Of all the conferences we attend, this one definitely has a different "feel". It seems like the attendees are younger, they dress hipper, and there is much more liveliness all around. A few of the sessions addressed this, as the "millennials", or Generation Y, have been using Web 2.0 tools such as Facebook and Twitter for many years now, and as they enter the workforce they expect similar tools to be part of how they accomplish their job tasks. It's important to note that it's not just millennials who expect these technologies, as workers young and old alike benefit from social and collaboration tools. I've highlighted some of my takeaways below, as well as a reaction from John Brunswick, who helped us staff the booth.
    - Giving your employees choices is empowering, but if there is no course of action or plan, it's useless. There is no such thing as collaboration without a goal.
    - In a few years, social will become a feature in the "platform", a component of collaboration.
    - Social will become part of the norm – just like email is expected when you start a job at a company, social will be too.
    - 1 in 3 of your employees are using tools your company doesn't sanction (how scary is that?!)
    - 25,000 pieces of content are created every second. Context is king. Social tools help us navigate and manage the complexities we face with information overload.
    - We need to design products for the way people work.
    - Consumerization of the enterprise – bringing social tools like Facebook to the organization.
    From John Brunswick: "The conference had solid attendance, standing as a testament to organizations making a concerted effort to understand what social tools exist to support their businesses. Many vendors were narrowly focused, and people were pleasantly surprised at the breadth of capability provided by Oracle WebCenter. People seemed to feel that it just made sense that social technology provides the most benefit when presented in the context of key business data." Did you attend the conference? What were some of your key takeaways?

    Read the article

  • VirtualBox 4.2.14 is now available

    - by user12611829
    The VirtualBox development team has just released version 4.2.14, and it is now available for download. This is a maintenance release for version 4.2 and contains quite a few fixes. Here is the list from the official Changelog.
    - VMM: another TLB invalidation fix for non-present pages
    - VMM: fixed a performance regression (4.2.8 regression; bug #11674)
    - GUI: fixed a crash on shutdown
    - GUI: prevent stuck keys under certain conditions on Windows hosts (bugs #2613, #6171)
    - VRDP: fixed a rare crash on the guest screen resize
    - VRDP: allow to change VRDP parameters (including enabling/disabling the server) if the VM is paused
    - USB: fixed passing through devices on Mac OS X host to a VM with 2 or more virtual CPUs (bug #7462)
    - USB: fixed hang during isochronous transfer with certain devices (4.1 regression; Windows hosts only; bug #11839)
    - USB: properly handle orphaned URBs (bug #11207)
    - BIOS: fixed function for returning the PCI interrupt routing table (fixes NetWare 6.x guests)
    - BIOS: don't use the ENTER / LEAVE instructions in the BIOS as these don't work in the real mode as set up by certain guests (e.g. Plan 9 and QNX 4)
    - DMI: allow to configure DmiChassisType (bug #11832)
    - Storage: fixed lost writes if iSCSI is used with snapshots and asynchronous I/O (bug #11479)
    - Storage: fixed accessing certain VHDX images created by Windows 8 (bug #11502)
    - Storage: fixed hang when creating a snapshot using Parallels disk images (bug #9617)
    - 3D: seamless + 3D fixes (bug #11723)
    - 3D: version 4.2.12 was not able to read saved states of older versions under certain conditions (bug #11718)
    - Main/Properties: don't create a guest property for non-running VMs if the property does not exist and is about to be removed (bug #11765)
    - Main/Properties: don't forget to make new guest properties persistent after the VM was terminated (bug #11719)
    - Main/Display: don't lose seamless regions during screen resize
    - Main/OVF: don't crash during import if the client forgot to call Appliance::interpret() (bug #10845)
    - Main/OVF: don't create invalid appliances by stripping the file name if the VM name is very long (bug #11814)
    - Main/OVF: don't fail if the appliance contains multiple file references (bug #10689)
    - Main/Metrics: fixed Solaris file descriptor leak
    - Settings: limit depth of snapshot tree to 250 levels, as more will lead to decreased performance and may trigger crashes
    - VBoxManage: fixed setting the parent UUID on diff images using sethdparentuuid
    - Linux hosts: work around for not crashing as a result of automatic NUMA balancing which was introduced in Linux 3.8 (bug #11610)
    - Windows installer: force the installation of the public certificate in background (i.e. completely prevent user interaction) if the --silent command line option is specified
    - Windows Additions: fixed problems with partial install in the unattended case
    - Windows Additions: fixed display glitch with the Start button in seamless mode for some themes
    - Windows Additions: Seamless mode and auto-resize fixes
    - Windows Additions: fixed trying to retrieve new auto-logon credentials if current ones were not processed yet
    - Windows Additions installer: added the /with_wddm switch to select the experimental WDDM driver by default
    - Linux Additions: fixed setting own timed out and aborted texts in information label of the lightdm greeter
    - Linux Additions: fixed compilation against Linux 3.2.0 Ubuntu kernels (4.2.12 regression as a side effect of the Debian kernel build fix; bug #11709)
    - X11 Additions: reduced the CPU load of VBoxClient in drag'and'drop mode
    - OS/2 Additions: made the mouse wheel work (bug #6793)
    - Guest Additions: fixed problems copying and pasting between two guests on an X11 host (bug #11792)
    The full changelog can be found here. You can download binaries for Solaris, Linux, Windows and MacOS hosts at http://www.virtualbox.org/wiki/Downloads

    Read the article

  • Development-led security vs administration-led security in a software product?

    - by haylem
    There are cases where you have the opportunity, as a developer, to enforce stricter security features and protections in a piece of software, even though they could very well be managed at an environmental level (i.e., the operating system would take care of them). Where would you say you draw the line, and what elements do you factor into your decision?
    Concrete Examples
    User management is the OS's responsibility. Not exactly meant as a security feature, but a similar case: Google Chrome used to not allow separate profiles. The invoked reason (though it now supports multiple profiles for the same OS user) used to be that user management was the operating system's responsibility.
    Disabling web-form fields. A recurrent request I see addressed online is to have auto-completion disabled on form fields. Auto-completion didn't exist in old browsers, and was a welcome feature at the time it was introduced for people who needed to fill in forms often. But it also brought in some security concerns, and so some browsers started to implement, on top of the (obviously needed) setting in their own preference/customization panel, an autocomplete attribute for form or input fields. And this has now been introduced into the upcoming HTML5 standard. For browsers that do not honor this attribute, strange hacks* are offered, like generating unique IDs and names for fields to avoid them being suggested in future forms (which comes with another herd of issues, like polluting your local auto-fill cache and not preventing a password from being stored in it, but instead probably duplicating its occurrences). In this particular case, and others, I'd argue that this is a user setting and that it's the user's desire and the user's responsibility to enable or disable auto-fill (by disabling the feature altogether). And if it is based on an internal policy and security requirement in a corporate environment, then substitute the administrator for the user in the above. I assume it could be counter-argued that the user may want to access non-critical applications (or sites) with this handy feature enabled, and critical applications with this feature disabled. But then I'd think that's what security zones are for (in some browsers), or a sign that you need a more secure (and dedicated) environment or account to use those applications. * I obviously don't deny the ingenuity of the people who were forced to find workarounds, just the necessity of said workarounds.
    Questions
    That was a tad long-winded, so I guess my questions are: Would you in general consider it to be the application's (hence, the developer's) responsibility? Where do you draw the line, if not in the "general" case?
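    For what it's worth, the "unique field name" workaround criticized above amounts to something like the following server-side trick. This is a small, framework-agnostic Python sketch of my own; the function names and the session shape are invented for illustration and are not part of any browser or standard discussed here:

        # Sketch of the randomized-field-name hack: the server issues a one-off
        # alias for each real field name on every rendering of the form, remembers
        # the mapping in the session, and translates it back on POST, so the
        # browser never sees a stable name to attach stored auto-fill values to.
        import secrets

        def aliased_form_fields(session, real_names):
            """Generate one-off field names for this rendering of the form."""
            session["field_aliases"] = {}
            for real in real_names:
                alias = f"{real}_{secrets.token_hex(4)}"
                session["field_aliases"][alias] = real
            return session["field_aliases"]

        def resolve_posted_fields(session, posted):
            """Map the aliased names in the POST body back to the real ones."""
            aliases = session.get("field_aliases", {})
            return {aliases[k]: v for k, v in posted.items() if k in aliases}

        # Example round trip
        session = {}
        aliases = aliased_form_fields(session, ["username", "password"])
        posted = {alias: "value" for alias in aliases}   # what the browser sends back
        print(resolve_posted_fields(session, posted))    # {'username': 'value', 'password': 'value'}

    Seeing it spelled out also makes the objection clearer: the browser still stores whatever the user typed, just under throwaway keys, which is exactly the cache pollution mentioned above.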

    Read the article

  • Database Mirroring – deprecated

    - by fatherjack
    Do you use mirroring on any of your databases? Do you use mirroring on SQL Server Standard Edition? I do, as a way of having a stand-by server ready to take over if there is a problem with the live server, so that business can continue despite whatever disaster may strike at our primary server location. In my experience it has been a great solution for us, as it is simple to implement, reliable and predictable. Mirroring has been around since SQL Server 2005 SP1, but with the release of SQL Server 2012 mirroring has now been placed on the deprecation list. That's right, Microsoft are removing this feature from SQL Server. SQL Server 2012 had lots of improvements and new features around this sort of technology – the High Availability, Disaster Recovery and AlwaysOn features described in detail here by Brent Ozar and Microsoft's own Customer Service and Support SQL Server Engineers. Now the bad news: the HADRON features are pretty much all wrapped up in the Enterprise Edition of SQL Server 2012. This is going to be a big issue for people, like me, who are only on Standard Edition of earlier versions, mostly due to our requirements and the budget (or lack thereof) required for Enterprise Edition licenses. No mirroring in Standard Edition means no upgrade. Don't panic. There are two stages of deprecation, and they don't happen fast. The first stage – Deprecation Announcement – means that Microsoft have decided that there is a limited future for a particular feature, and this is your cue that new projects and developments should not be implemented on this technology, as it will cease to exist in the future. This is where mirroring currently stands. You have time to consider your options and start work on planning how you will move away from using this feature. This can be 2 or 3 versions of SQL Server away, possibly more. The next stage is Deprecation Final Support – this is where you are on your last chance. When you see this, the next version of SQL Server will not have this feature in it, so you need to implement your plans to move to an alternative solution. While these two phases are taking place, Microsoft are open to feedback on how people use their products, and if enough people make the case for mirroring (or an equivalent technology) to be in the Standard Edition then they may make changes rather than lose customers or have customers stop upgrading in order to keep the functionality they need. Denny Cherry (@MrDenny) has published an article on this same topic here with more detail than me, so I won't go over old ground. All I will say is that you should read his article now and then follow the link to his own site, where he is collecting people's information on how they use mirroring in Standard Edition so that our voice can be put to Microsoft.

    Read the article

  • Wordpress Installation (on IIS and SQL Server)

    - by Davide Mauri
    To proceed with the installation of WordPress on SQL Server and IIS, first of all you need to do the following steps:
    1. Create a database on SQL Server that will be used by WordPress.
    2. Create a login that can access the just-created database, and put the user into the ddladmin, db_datareader and db_datawriter roles.
    3. Download and unpack the WordPress 3.3.2 (latest version as of 27 May 2012) zip file into a directory of your choice.
    4. Download the wp-db-abstraction 1.1.4 (latest version as of 27 May 2012) plugin from the wordpress.org website.
    Now that the basic actions have been done, you can start to set up and configure your WordPress installation. Unpack and follow the instructions in the README.TXT file to install the Database Abstraction Layer. Mainly you have to:
    1. Upload wp-db-abstraction.php and the wp-db-abstraction directory to wp-content/mu-plugins. This should be parallel to your regular plugins directory. If the mu-plugins directory does not exist, you must create it.
    2. Put the db.php file from inside the wp-db-abstraction.php directory into wp-content/db.php.
    Now you can create an application pool in IIS (the original post shows the settings in a screenshot) and create a website, using that application pool, that points to the folder where you unpacked the WordPress files. Be sure to give the "Write" permission to the IIS account, as pointed out in this (old, but still quite valid) installation manual: http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#iis
    Now you're ready to go. Point your browser to the configured website and the WordPress installation screen will be there for you. When you're asked to enter information to connect to the MySQL database, simply skip that page, leaving the default values. If you have installed the Database Abstraction Layer, another database installation screen will appear after the MySQL one, and here you can enter the configuration information needed to connect to SQL Server. After having finished the installation steps, you should be able to access and navigate your WordPress site. A final touch, and it's done: just add the needed rewrite rules http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#urlrewrite and that's it! Well, not really. Unfortunately the current (as of 27 May 2012) version of the Database Abstraction Layer (1.1.4) has some bugs. Luckily they can be quickly fixed:
    1. Backslash fix: http://wordpress.org/support/topic/plugin-wp-db-abstraction-fix-problems-with-backslash-usage
    2. Select Top 0 fix: make the change to the file ".\wp-content\mu-plugins\wp-db-abstraction\translations\sqlsrv\translations.php" suggested by "debettap": http://sourceforge.net/tracker/?func=detail&aid=3485384&group_id=315685&atid=1328061
    And now you have a 100% working WordPress installation on SQL Server! Since I also wanted to take advantage of SQL Server Full-Text Search, I've created a very simple WordPress plugin to set up full-text search and to use it as the website search engine: http://wpfts.codeplex.com/ Enjoy!

    Read the article

  • Spotlight on RIVA: CRM integration for Oracle CRM on Demand and Microsoft Exchange

    - by Richard Lefebvre
    Introducing Riva from Omni - an Oracle ISV partner specializing in Enterprise Management and Integration Solutions Riva delivers advanced, server-side integration for Oracle CRM On Demand and Microsoft Exchange or even Novell GroupWise. Riva allows Oracle customers to go beyond the standard Outlook plug-in to deliver additional value for the end user as they interact between Outlook and CRM On Demand. Riva syncs CRM On Demand to ALL Exchange mail apps, not just Windows Outlook.  So, whether customers are using Outlook 2010, Outlook Web Access (web client), Outlook 2011 for Mac, Apple Mail, Outlook on Citrix  or a mobile device, Riva's got them covered. There are no plug-ins to be installed, configured, managed and maintained on users' desktops, laptops as Riva delivers Server-side synchronisation for CRMOD and Exchange. The automation of CRM and Outlook integration will remove the reliance upon users to synchronise between the two with Riva handling this process. Riva allows administrators to define sync policies and apply them to individuals or groups of users depending on their sync requirements. Administrators will be able to determine and manage the exposure of the most pertinent detail to be synchronised between Outlook and CRM On Demand. Custom and organic contact filtering for large deployments i.e. Based on ownership, groupings and contact frequency, filters can be applied on what contact records are shared with the users. Riva provides the capability to synchronise CRM and Outlook beyond Contacts, Calendar entries and Email. The synchronisation can be extended to cater for  opportunities, quotes and custom objects for example within the Outlook interface. Riva SmartConvert Folders can automate the creation of opportunities and associated contacts for example if they don't already exist. This can facilitate a reduction in manual detail entry through quick association whilst also benefiting user adoption. From a mobile perspective, Riva allows users to view and manage their CRM On Demand contacts, calendar, tasks, opportunities and cases from iPad, iPhone, Android and BlackBerry devices.  Again, there are no mobile apps or additional plugins to install, configure or manage. We sync CRM On Demand to Exchange.  Because the mobile device is connected to an Exchange mailbox, the information automatically syncs down to the native address book, calendar and mail apps on the smartphone or tablet. Riva Datasheet for CRM On Demand Riva Brochure – Oracle CRM On Demand  Technical Knowledgebase & Riva Trial  http://kb.omni-ts.com/47/ Comparison to Outlook Plug-ins Riva Diagram – Riva Comparison with Outlook Plug-ins Contact: Wolfgang Berger - [email protected]

    Read the article

  • Microsoft JScript runtime error in Visual studio 2010

    - by anirudha
    There are many tools for debugging JavaScript: Visual Studio, Firebug and some other great plugins among them. Here I show you a solution for "Microsoft JScript runtime error: Object doesn't support this property or method". If you search Google for how to debug JavaScript in Visual Studio, most results tell you to follow these instructions: go to Internet Options in IE > Advanced tab > Browsing section > uncheck both options "Disable script debugging (Internet Explorer)" and "Disable script debugging (Other)". That information is outdated and no longer true: in Visual Studio you can use JavaScript debugging whether these options are checked or unchecked, and you can try the same steps in the Express version of Visual Studio as well. I found a little problem: my code [based on a jQuery plugin] tries to call some functions, and some of them are not implemented in every browser, so it falls back to others; that is fine and works in every browser. But when Visual Studio finds a call to a function that is not implemented in the browser I am using to debug, it tells me "Microsoft JScript runtime error: Object doesn't support this property or method", and it tells me again every time I refresh [F5] the page. So it does something you will never like: it shows the error window first, even though the code is not buggy, and shows it every time before the page you actually want to see. This behaviour wears you down. It is not much of a problem if you do not commonly use IE, but when you really want to debug it becomes painful. Well, I have a workaround if you really like debugging in Visual Studio with IE: when you debug JavaScript in IE, choose compatibility mode, and IE will no longer force you to see the window that tells you things you do not want to see. How to do that: after you debug the project or website, IE launches automatically. Open the developer tools by pressing F12 and you will see a window something like this: by default it gives you Document Mode IE7 standards, or a Browser Mode based on your settings. First, set the Browser Mode to a compatibility view [any of them], and the next time the error window that tells you "Microsoft JScript runtime error: Object doesn't support this property or method" will never come again.

    Read the article

  • How to port animation from one skeleton to another?

    - by shawn
    While I need to do this in a Blender3D modeler script, the math should be similar for other modelers or realtime engines. Blender3D-specific terminology:
    - Armature = skeleton
    - EditBone = rest pose bone (stores the rest pose matrix)
    - PoseBone = can store a different pose (animation matrix) for each frame of your animation
    I need to share animations (Blender Actions) between Armatures which have EditBones with the same names and the same positions, but which can have different (rest pose) angles and scales. Plus, the Armatures might have different bone hierarchies (bone parenting / no bone parenting). Why I need this: I've made an importer/exporter for a 3d format for a game. The format doesn't store enough info to connect/parent the bones, which makes posing/animating character models in a 3d modeller nearly impossible (original model files for the 3d modeler don't exist; this is for modding). As there are only 2 character skeleton types in the game, I decided to optionally generate the bone hierarchy from hardcoded data in the model importer and undo that in the exporter. This allows me to easily pose the model for checking weights, easily create weights, makes it easier for Blender to generate automatic weights, and of course makes animating possible. This worked perfectly: the importer optionally generated the Armature itself and the exporter removed those changes, so the exported model works with existing animations in the game. But now I'm writing an importer and exporter for the game's animation format, and here come the problems of:
    - trying to make original animations work in Blender with my "custom" (modified) Armature, and
    - trying to make animations created using the "custom" (modified) Armature work with the original models in the game (and Blender).
    Constraints or bone snapping inside Blender won't work, as they don't care that the bones have different angles in the rest pose; they will still face the same direction. It seems I just need to get the "difference" between the EditBone matrices of all EditBones for the two Armatures somehow and apply that difference to the PoseBone matrices of all PoseBones, for all frames of my animation. I need to know how to get that difference and how to apply it. BTW, PoseBone matrices are relative to the rest pose; by default they are the identity matrix:
    [1.000000, 0.000000, 0.000000, 0.000000]
    [0.000000, 1.000000, 0.000000, 0.000000]
    [0.000000, 0.000000, 1.000000, 0.000000]
    [0.000000, 0.000000, 0.000000, 1.000000]
    So the question is: how do I get the difference between two bone (EditBone) matrices and apply that difference to the animation matrices (PoseBone matrices)? Please be easy on the matrix math.
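    Not an authoritative answer, but here is a rough sketch of the matrix manipulation being asked about, written against Blender's Python API. The armature object names are placeholders, matrix_local / matrix_basis are the usual 2.5+ attributes, and bone parenting differences are ignored, so treat every detail as an assumption to verify:

        import bpy

        # Placeholder names; same bone names and positions assumed in both armatures.
        src_arm = bpy.data.objects["Armature_original"]   # armature the Action was authored for
        dst_arm = bpy.data.objects["Armature_modified"]   # different rest angles/scales

        for name, dst_pbone in dst_arm.pose.bones.items():
            if name not in src_arm.pose.bones:
                continue
            # Rest pose matrices (EditBone data) in armature space.
            src_rest = src_arm.data.bones[name].matrix_local
            dst_rest = dst_arm.data.bones[name].matrix_local
            # Pose matrix (animation data) relative to the source rest pose.
            src_pose = src_arm.pose.bones[name].matrix_basis
            # Want the same armature-space result from the new rest pose:
            # dst_rest * dst_pose == src_rest * src_pose
            # => dst_pose = dst_rest^-1 * src_rest * src_pose
            dst_pbone.matrix_basis = dst_rest.inverted() * src_rest * src_pose
            # On Blender 2.8+ use the @ operator instead of * for matrix multiplication.

    For a whole Action you would read src_pose frame by frame (or from the F-Curves), set the result on the destination PoseBone and insert keyframes; the different-hierarchy case additionally needs the parent rest matrices folded into both sides before taking the difference.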

    Read the article

  • How to build a 4x game?

    - by Marco
    I'm trying to study how to successfully implement a 4X game. Areas of interest:
    1) Map data: how to store stellar systems (graphs?), how to generate them, and so on.
    2) Multiplayer: how to organize code into a non-graphical server and a client to display it.
    3) Command system: what patterns exist to capture user and AI decisions and handle them, adding at first "explore" and "colonize", then "combat", "research", "spy" and so on (commands can affect ships, planets, research, etc.).
    4) AI system: the AI can use commands to expand, and to upgrade planets and ships.
    I know this is a big question, so help is appreciated :D
    1) Map data. The best choice is a graph to model the galaxy. A node is a stellar system, and every system has a list of planets. Ships cannot travel outside of predefined paths, like in Ascendancy: http://www.abandonia.com/files/games/221/Ascendancy_2.png Every connection between two stellar systems has a cost, in turns. Generating a galaxy is only a matter of:
    - dimension: the number of stellar systems;
    - variety: randomizing the number of planets and their types (desert, earth-like, etc.);
    - positions: placing each stellar system in game space;
    - connections: ensuring that a path exists between every node, so the graph is "connected" (not sure if this is the mathematically correct term).
    A small sketch of this generation step appears at the end of this question.
    2) Multiplayer. The game is organized in turns: player 1, player 2, AI 1, AI 2. The server takes care of all data and the clients just display it and collect data changes. Because it is a turn-based game, latency is not a problem :D
    3) Command system. I would like to design a hierarchy of commands to take care of this aspect, something like:
    - abstract GenericCommand(target)
    - ExploreCommand(ship) extends GenericCommand
    - ColonizeCommand(ship)
    - BuildCommand(planet, object)
    and so on. In my head, all these commands are stored in a queue for every planet, ship, research center or spy, and each turn a command is sent to the server to apply the command and change the data state.
    4) AI system. I don't have any idea about this. It is a big topic, and what I want is a simple AI, something like "expand and fight against everyone". I am thinking about a behaviour tree to control AI moves, so I can develop an AI that tries to build ships to expand and then colonize planets, upgrade them through science, and fight enemies. Could it be done with a finite state machine too? Any ideas, resources or articles are welcome!
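    To illustrate point 1, here is a minimal sketch in Python (my own illustration with made-up names and numbers, not from the original question) of generating a connected galaxy graph: each system gets a position and some random planets, a spanning tree guarantees connectivity, and extra links add alternative routes with per-edge travel costs in turns:

        import random

        PLANET_TYPES = ["desert", "earth-like", "gas giant", "ice", "barren"]

        def generate_galaxy(num_systems=20, extra_links=10, max_planets=5):
            systems = []
            for i in range(num_systems):
                systems.append({
                    "name": f"System-{i}",
                    "position": (random.uniform(0, 100), random.uniform(0, 100)),
                    "planets": [random.choice(PLANET_TYPES)
                                for _ in range(random.randint(1, max_planets))],
                    "links": {},   # neighbour index -> travel cost in turns
                })

            def link(a, b):
                cost = random.randint(1, 4)
                systems[a]["links"][b] = cost
                systems[b]["links"][a] = cost

            # Spanning tree: attach each new system to a random earlier one,
            # so every node is reachable and the graph is connected.
            for i in range(1, num_systems):
                link(i, random.randrange(i))

            # Extra edges to create alternative routes between systems.
            for _ in range(extra_links):
                a, b = random.sample(range(num_systems), 2)
                link(a, b)

            return systems

        galaxy = generate_galaxy()
        print(galaxy[0]["name"], galaxy[0]["planets"], galaxy[0]["links"])

    The spanning-tree step is the cheap way to get the "connected" property without having to check reachability afterwards; positions are only used for display, since travel is along the explicit links.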

    Read the article

  • Accessing SSRS Report Manager on Windows 7 and Windows 2008 Server

    - by Testas
    Here is a problem I was emailed last night.
    Problem
    SSRS 2008 on Windows 7 or Windows 2008 Server is configured with a user account that is a member of the Administrators group, yet that account cannot access Report Manager without running IE as administrator and adding the SSRS server to trusted sites. (The built-in Administrators group is by default made a member of the System Administrator and Content Manager SSRS roles.) As a result, the OS limits the use of elevated permissions by removing the administrator permissions when accessing applications such as SSRS.
    Resolution – two options
    1. Continue to run IE as administrator; you must still add the report server site to trusted sites.
    2. Add the site to trusted sites and manually add the user to the System Administrator and Content Manager roles:
    - Open a browser window with Run as administrator permissions. From the Start menu, click All Programs, right-click Internet Explorer, and select Run as administrator. Click Allow to continue.
    - In the URL address, enter the Report Manager URL.
    - Click Tools, then Internet Options, then Security. Click Trusted Sites, then Sites. Add http://<your-server-name>. Clear the check box "Require server certification (https:) for all sites in this zone" if you are not using HTTPS for the default site. Click Add, then OK.
    - In Report Manager, on the Home page, click Folder Settings. In the Folder Settings page, click Security, then New Role Assignment. Type your Windows user account in this format: <domain>\<user>. Select Content Manager. Click OK.
    - Click Site Settings in the upper corner of the Home page. Click Security, then New Role Assignment. Type your Windows user account in this format: <domain>\<user>. Select System Administrator. Click OK.
    - Close Report Manager. Re-open Report Manager in Internet Explorer, without using Run as administrator.
    Problems will also exist when deploying SSRS reports from Business Intelligence Development Studio (BIDS) on Windows 7 or Windows 2008; therefore you should run Business Intelligence Development Studio as administrator as well. Information on this issue can be found at <http://msdn.microsoft.com/en-us/library/bb630430.aspx>

    Read the article

  • Library to fake intermittent failures according to tester-defined policy?

    - by crosstalk
    I'm looking for a library that I can use to help mock a program component that works only intermittently - usually, it works fine, but sometimes it fails. For example, suppose I need to read data from a file, and my program has to avoid crashing or hanging when a read fails due to a disk head crash. I'd like to model that by having a mock data reader function that returns mock data 90% of the time, but hangs or returns garbage otherwise. Or, if I'm stress-testing my full program, I could turn on debugging code in my real data reader module to make it return real data 90% of the time and hang otherwise. Now, obviously, in this particular example I could just code up my mock manually to test against a random() routine. However, I was looking for a system that allows implementing any failure policy I want, including: fail randomly 10% of the time; succeed 10 times, fail 4 times, repeat; fail semi-randomly, such that one failure tends to be followed by a burst of more failures; any policy the tester wants to define. Furthermore, I'd like to be able to change the failure policy at runtime, using either code internal to the program under test, or external knobs or switches (though the latter can be implemented with the former). In pig-Java, I'd envision a FailureFaker interface like so:

        interface FailureFaker {
            /** Return true if and only if the mocked operation succeeded.
                Implementors should override this method with versions consistent
                with their failure policy. */
            public boolean attempt();
        }

    And each failure policy would be a class implementing FailureFaker; for example there would be a PatternFailureFaker that would succeed N times, then fail M times, then repeat, and an AlwaysFailFailureFaker that I'd use temporarily when I need to simulate, say, someone removing the external hard drive my data was on. The policy could then be used (and changed) in my mock object code like so:

        class MyMockComponent {
            FailureFaker faker;

            public void doSomething() {
                if (faker.attempt()) {
                    // ...
                } else {
                    throw new RuntimeException();
                }
            }

            void setFailurePolicy(FailureFaker policy) {
                this.faker = policy;
            }
        }

    Now, this seems like something that would be part of a mocking library, so I wouldn't be surprised if it's been done before. (In fact, I got the idea from Steve Maguire's Writing Solid Code, where he discusses this exact idea on pages 228-231, saying that such facilities were common in Microsoft code of that early-90's era.) However, I'm only familiar with EasyMock and jMockit for Java, and neither, AFAIK, has this function, or something similar with different syntax. Hence, the question: Do such libraries as I've described above exist? If they do, where have you found them useful? If you haven't found them useful, why not?
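    Continuing the pig-Java, the policies I listed are small enough to hand-roll against that same hypothetical FailureFaker interface; something like this (again, sketch code of my own, not from EasyMock or jMockit):

        import java.util.Random;

        // Succeeds N times, then fails M times, then repeats.
        class PatternFailureFaker implements FailureFaker {
            private final int successes, failures;
            private int position = 0;

            PatternFailureFaker(int successes, int failures) {
                this.successes = successes;
                this.failures = failures;
            }

            public boolean attempt() {
                boolean ok = position < successes;
                position = (position + 1) % (successes + failures);
                return ok;
            }
        }

        // Fails randomly with the given probability (0.1 == "fail 10% of the time").
        class RandomFailureFaker implements FailureFaker {
            private final double failureRate;
            private final Random random = new Random();

            RandomFailureFaker(double failureRate) { this.failureRate = failureRate; }

            public boolean attempt() { return random.nextDouble() >= failureRate; }
        }

    Switching policies mid-test is then just a call to setFailurePolicy(new PatternFailureFaker(10, 4)) or similar at the point where the simulated hardware "goes away". What I'm really after, though, is an existing library that has already thought these policies through.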

    Read the article

  • Stop Saying "Multi-Channel!"

    - by David Dorf
    I keep hearing the term "multi-channel" in our industry, but it's time to move on. It kinda reminds me of the term "ECR" or electronic cash register. Long ago ECR was a leading-edge term, but nowadays it's rarely used because it's table stakes. After all, what cash register today isn't electronic? The same logic applies to multi-channel, at least when we're talking about tier-1 and tier-2 retailers. If you're still talking about multi-channel retailing, you're in big trouble. Some have switched over to the term "cross-channel," and that's a step in the right direction but still falls short. It's kinda like saying, "I upgraded my ECR to accept debit cards!" Yawn. Who hasn't? Today's retailers need to focus on omni-channel, which I first heard from my friends over at RSR but was originally coined at IDC. First retailers added e-commerce to their store and catalog channels, yielding multi-channel retailing. Consumers could use the channel that worked best for them. Then some consumers wanted to combine channels with features like buy-on-the-Web, pickup-in-the-store. Thus began the cross-channel initiatives to break down the silos and enable the channels to communicate with each other. But the multi-channel architecture is full of duplication that thwarts efforts to provide a consistent experience. Each has its own cart, its own pricing, and often its own CRM. This was an outgrowth of trying to bring the independent channels to market quickly. Rather than reusing and rebuilding existing components to meet the new demands, silos were created that continue to exist today. Today's consumers want omni-channel retailing. They want to interact with brands in a consistent manner that is channel transparent, yet optimized for that particular interaction. The diagram below, from the soon-to-be-released NRF Mobile Blueprint v2, shows this progression. For retailers to provide an omni-channel experience, there needs to be one logical representation of products, prices, promotions, and customers across all channels. The only thing that varies is the presentation of the content based on the delivery mechanism (e.g. shelf labels, mobile phone, web site, print, etc.), and often these mechanisms can be combined in various ways. I'm looking forward to the day in which I can use my phone to scan QR-codes in a catalog to create a shopping cart of items, then do some further research on the retailer's Web site and be told about related items that might interest me, easily solicit opinions and reviews from social sites, and finally enter the store to pick up my items, knowing that any applicable coupons have been applied. In this scenario, I, the consumer, am dealing with a single brand that is aware of me and my needs throughout the entire transaction. Nirvana.

    Read the article

  • Correcting color-shifted mirrored i915 driver in 12.04?

    - by Will Martin
    I was called in to fix a friend's malfunctioning HP Pavilion. She's not sure exactly which model, but the sticker on the bottom says "G60". The problem was a failed upgrade to 12.04. I was able to mostly repair it with sudo apt-get -f install, which ran setup and configuration for several hundred packages. The biggest problem at the moment is Xorg. The login screen (lightdm) loads normally but at a reduced resolution (1024x768 instead of 1366x768). But once you log in, it looks like this: Observe that the colors of the dock on the left and the bar at the top are normal. But the background is filled with bizarro color-skewed ghost images of the desktop. In all cases, the actual contents of any programs you run are a totally illegible mess, except that the bar at the top of any program windows looks and acts normally. And the ghost images are interactive! For example, if you click the icon in the top right corner to get the "shut down" menu, the same menu will appear in the ghost images below. Starting a terminal will start a terminal window in both the real desktop and the ghost images, and moving it around updates both the real and ghost desktops. I suspect Xorg is using some kind of wrong driver and/or parameter for the graphics hardware. Here is the graphics-relevant portion of the lspci -v output:

        00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 09)
                Subsystem: Hewlett-Packard Company Device 360b
                Flags: bus master, fast devsel, latency 0
                Capabilities: [e0] Vendor Specific Information: Len=0a <?>
                Kernel driver in use: agpgart-intel

        00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
                Subsystem: Hewlett-Packard Company Device 360b
                Flags: bus master, fast devsel, latency 0, IRQ 44
                Memory at d0000000 (64-bit, non-prefetchable) [size=4M]
                Memory at c0000000 (64-bit, prefetchable) [size=256M]
                I/O ports at 5110 [size=8]
                Expansion ROM at <unassigned> [disabled]
                Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit-
                Capabilities: [d0] Power Management version 3
                Kernel driver in use: i915
                Kernel modules: i915

        00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 09)
                Subsystem: Hewlett-Packard Company Device 360b
                Flags: bus master, fast devsel, latency 0
                Memory at d2500000 (64-bit, non-prefetchable) [size=1M]
                Capabilities: [d0] Power Management version 3

    I'm not sure what to check next. I would ordinarily check xorg.conf to see what it says, but that apparently doesn't exist any more, and my googling has not yielded any useful techniques for getting Xorg to tell me what settings it decided to use. The weird part is that it works fine on the login screen. It's only when you actually log in as a user that the display gets screwed up. Suggestions?
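    One thing I have not tried yet: since 12.04 no longer ships an xorg.conf, I could create a minimal /etc/X11/xorg.conf by hand to pin the intel driver and temporarily rule out 2D acceleration as the culprit. A rough, untested sketch (the option names are the standard ones for the intel DDX; whether this actually cures the ghosting is a guess):

        Section "Device"
            Identifier "Intel Mobile 4 Series graphics"
            Driver     "intel"
            # Diagnostic only: disable acceleration and see if the ghost images go away
            Option     "NoAccel" "true"
        EndSection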

    Read the article

  • Updates about Multidimensional vs Tabular #ssas #msbi

    - by Marco Russo (SQLBI)
    I recently read the blog post from James Serra, Tabular model: Not ready for prime time? (read also the comments, because there are discussions about a few points raised by James), and the following post from Christian Wade, Multidimensional or Tabular. In the last 2 years I have worked with many companies adopting Tabular in different scenarios and I agree with some of the points expressed by James in his post (especially about missing features in Tabular if compared to Multidimensional), but I strongly disagree on others. In general, Tabular is a good choice for a new project when: the development team does not have a good knowledge of Multidimensional and MDX (DAX is faster to learn, not as easy as MS sells it, but definitely easier than MDX); you don’t need calculations based on hierarchies (common in certain financial applications, but not as common as it might seem); there are important calculations based on distinct count measures; there are complex calculations based on many-to-many relationships. Until now, I have never suggested migrating an existing Multidimensional model to a Tabular one. There should be very important reasons for that, such as performance issues in distinct count and many-to-many relationships that cannot be easily solved by optimizing the Multidimensional model, but I still have never encountered this scenario. I would say that in 80% of new projects you might use either Multidimensional or Tabular, and the real difference is the time-to-market depending on the skills of the development team. So it’s not strange that those who are used to Multidimensional are not moving to Tabular, as they do not get a particular benefit from the new model unless specific requirements exist. The recent DAXMD feature that allows using SharePoint Power View on Multidimensional is a really important one, even if I’d also like Excel Power View enabled for this scenario (this should be just a question of time). Another scenario in which I’m seeing a growing adoption of Tabular is in companies that create models for their product/service and do that by using XMLA or Tabular AMO 2012. I am used to calling them ISVs, even if those providing services cannot really be defined in this way. These companies are facing the multitenancy challenge with Tabular, and even if this is a niche market, I see some potential here, because adopting Tabular seems a much more natural choice than Multidimensional in those scenarios where an analytical engine has to be embedded to deliver one of the features of a larger product/service delivered to customers. I’d like to see more feedback in the comments: tell your story of choosing between Tabular and Multidimensional in a BI project you started with SQL Server 2012, thanks!
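    To make the distinct count point concrete, that kind of measure is a one-liner in DAX (the table and column names below are just placeholders):

        Unique Customers :=
        DISTINCTCOUNT ( Sales[CustomerKey] )

    In Multidimensional, by contrast, each distinct count measure typically requires its own measure group and is usually far more expensive at query time, which is exactly why it is one of the criteria listed above.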

    Read the article

  • GNU/Linux interactive table content GUI editor?

    - by sdaau
    I often find myself needing to gather data (say, from the internet) into a table, for comparison purposes. I usually need the final table output in HTML or MediaWiki mostly, but oftentimes also LaTeX. My biggest problem is that I often forget the correct table syntax for these markup languages, as well as what needs to be properly escaped in the inline data for the table to render correctly. So, I often wish there were a GUI application which provides a tabular framework - which I could stick "Always on Top" as a desktop window, and into which I could paste content into specific cells - before finally exporting the table as code in the correct language. One application that partially allows this is Open/LibreOffice Calc: The good things here are that: I can drag and drop browser content into a specifically targeted table cell (here B2); "rich" text / HTML code gets pasted; for long content, the cell (column) width stays put as it originally was. The bad things are that: when the cell height (due to content size) becomes larger than the Calc window, it becomes nearly impossible to scroll the Calc contents up and down (at least with the mousewheel), as the view gets reset to the top-right corner of the selected cell; Calc shows an "endless"/unlimited field of cells, so not exactly a "table" - which I find visually very confusing (and cognitively taxing); it can only export the table to HTML. What I would need is an application that: allows for a limited-size table, but with quick adding of rows and columns (e.g. via corresponding + buttons); allows for quick setup of row and column height and width (as well as table size); stays put at those sizes, regardless of the size of content pasted in; if cell content overflows, cell scrollbars are shown (cell content could possibly be re-edited in a separate/new window); if the table overflows the window size, window scrollbars are shown; exports the table in multiple formats (I'd need both HTML and MediaWiki), properly escaping cell content for each (the possibility to strip HTML tags from content pasted in cells, to get plain text, is a plus). Targeting a specific cell in the table for the content paste operation is a must - it doesn't have to be drag'n'drop though, a right click over a cell with "Paste content" is enough. I'd also want the ability to click in a specific cell and type in (plain text) content immediately. So, my question is: is there an application out there that already does something like this? The reason I'm asking is that - as the screenshots show - for instance Libre/OpenOffice allows it, but only somewhat (as using it for that purpose is tedious). I know there exist some GUI editors for Linux (both for UI like guile or HTML like amaya); but I don't know them well enough to pinpoint whether any of them would offer this kind of functionality (and at least in my searches, that kind of functionality, if present in diverse software, seems not to be advertised). Note I'm not interested in styling an HTML table, which is why I haven't used "table designer" in the title, but "table editor" (for lack of a better term) - I'm interested in (quickly) adjusting the row/column size of the table, and populating it with pasted data (which is possibly HTML) in a GUI; and finally exporting such a table as self-contained HTML (or other) code.
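    As an aside, the syntax I keep forgetting is trivial but fiddly; for example, the same small two-column table in MediaWiki markup versus HTML (made-up sample data):

        {| class="wikitable"
        ! Item !! Price
        |-
        | Apple || 1.20
        |-
        | Pear || 0.80
        |}

        <table>
          <tr><th>Item</th><th>Price</th></tr>
          <tr><td>Apple</td><td>1.20</td></tr>
          <tr><td>Pear</td><td>0.80</td></tr>
        </table>

    Remembering (and escaping) exactly this kind of thing is what I'd like the tool to take off my hands.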

    Read the article

  • How to set the monitor to its native resolution when xrandr approach isn't working?

    - by Krishna Kant Sharma
    I am trying to set up my Samsung SyncMaster B2030 monitor in Ubuntu 12.04. Its native resolution is 1600x900, which I am not getting in Ubuntu and which I am trying to get. I tried the xrandr approach provided in these URLs: 1) http://www.ubuntugeek.com/how-change-display-resolution-settings-using-xrandr.html 2) How to set the monitor to its native resolution which is not listed in the resolutions list? S1) I used cvt 1600 900 60 to get the modeline. The output was:

        # 1600x900 59.95 Hz (CVT 1.44M9) hsync: 55.99 kHz; pclk: 118.25 MHz
        Modeline "1600x900_60.00"  118.25  1600 1696 1856 2112  900 903 908 934 -hsync +vsync

    S2) I then used xrandr and the output was:

        Screen 0: minimum 8 x 8, current 1152 x 864, maximum 8192 x 8192
        DVI-I-0 disconnected (normal left inverted right x axis y axis)
        VGA-0 connected 1152x864+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
           1024x768       60.0 +
           1360x768       60.0     59.8
           1152x864       60.0*
           800x600        72.2     60.3     56.2
           680x384        119.9    119.6
           640x480        59.9
           512x384        120.0
           400x300        144.4
           320x240        120.1
        DVI-I-1 disconnected (normal left inverted right x axis y axis)
        HDMI-0 disconnected (normal left inverted right x axis y axis)

    which gave me "VGA-0". S3) Then I used xrandr --newmode "1600x900_60.00" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync but instead of adding the modeline it just threw an error:

        X Error of failed request:  BadName (named color or font does not exist)
          Major opcode of failed request:  153 (RANDR)
          Minor opcode of failed request:  16 (RRCreateMode)
          Serial number of failed request:  29
          Current serial number in output stream:  29

    My system details: 1) Ubuntu 12.04 LTS 2) Graphics card: GeForce 9400 GT/PCIe/SSE2 (the driver is successfully installed; I am checking it in System Settings > Details, and it shows that the driver is installed and that it is "GeForce 9400 GT/PCIe/SSE2") 3) Monitor: Samsung SyncMaster B2030 4) Resolutions I am getting: 800x600, 1024x768, 1152x864 (I am currently using this one), 1360x768 (this one isn't working properly). Does anyone know what I can do? Thanks in advance. UPDATE (1): Today I tried it again, and adding a modeline (using --newmode) worked. But when I used --addmode via xrandr --addmode VGA-0 1600x900_60.00 it gave this error:

        X Error of failed request:  BadMatch (invalid parameter attributes)
          Major opcode of failed request:  153 (RANDR)
          Minor opcode of failed request:  18 (RRAddOutputMode)
          Serial number of failed request:  29
          Current serial number in output stream:  30
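    For completeness, the full sequence I am following is the usual one (values copied from the cvt output above). As I understand it, the proprietary NVIDIA driver often rejects RRAddOutputMode, which may be where the BadMatch comes from; in that case the mode would have to be added through xorg.conf/nvidia-settings instead:

        # 1. Generate a modeline for the desired resolution
        cvt 1600 900 60

        # 2. Register the new mode with X (numbers taken from the cvt output)
        xrandr --newmode "1600x900_60.00" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync

        # 3. Attach the mode to the output reported by plain xrandr
        xrandr --addmode VGA-0 1600x900_60.00

        # 4. Switch to it
        xrandr --output VGA-0 --mode 1600x900_60.00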

    Read the article

  • Cool Tools You Can Use: Validation Templates for PeopleSoft Contracts Processes

    - by Mark Rosenberg
    This is the first in a series of postings we’ll be making under the heading of Cool Tools You Can Use. Our PeopleSoft product management team identified the need for this series after reflecting on the many conversations we have each year with our PeopleSoft community members. During these conversations, we discovered that customers and implementation partners were often not aware that solutions exist to the problems they were trying to address, and that the solutions were readily available at no additional charge. Thus, the Cool Tools You Can Use series will describe the business challenge we’ve heard, the PeopleSoft solution to the challenge, and how you can learn more about the solution, so that everyone can be sure to make full use of what PeopleSoft applications have to offer. The first cool tool we’ll look at is the Validation Template for PeopleSoft Contracts Process Requests, which was first released in December 2013 as part of PeopleSoft Contracts 9.2 Update Image 4. The business issue our customers highlighted to us is the need to tightly control, but easily configure and manage, the scope of data that any user can process when initiating a process. Control of each user’s span of impact is essential to reducing billing reconciliation issues, passing span-of-authority audits, and reducing (or even eliminating) the frequency of unexpected process results.
    Setting Up the Validation Template for a PeopleSoft Contracts Process
    With the validation template, organizations can easily and quickly ensure the software restricts the scope of transactions a user can affect, which gives organizations the confidence to know that business processes are being governed effectively. Additionally, this control of PeopleSoft Contracts process requests can be applied, and easily maintained and adjusted, from a web browser, thereby enabling analysts to administer the rules without having to engage software developers to customize the software. During the field validation template setup, an analyst specifies the combinations of fields that must contain values when a user tries to set up a run control and initiate a PeopleSoft Contracts process from a process request page. For example, for the Process Limits component, an organization could require that users enter a valid combination of values for the business unit, contract, and contract type fields, or a value in the contract administrator field. Until the user enters a valid combination of entries on the process request page, he cannot launch the process. With the validation template activated for process request pages, organizations can be confident that PeopleSoft Contracts users will not accidentally begin generating invoices or triggering other revenue management processes for transactions beyond their scope of authority. To learn more about the Validation Template, please review the Defining Validation Templates section of the PeopleSoft Contracts PeopleBooks.

    Read the article

  • SQL SERVER – Backup SQL databases to Box or SkyDrive

    - by Pinal Dave
    To ensure your SQL Server or Azure databases remain safe, you should back up your databases periodically. And it is important to store the backups in a reliable location. Microsoft SkyDrive currently offers 7GB free, Box offers 5GB free – both are reliable and it is simple to send your backups there. SQLBackupAndFTP, in its latest version 9, added the option to back up to SkyDrive and Box (in addition to local/network folder, NAS drive, FTP, Dropbox, Google Drive and Amazon S3). Just select the databases that you’d like to back up and choose to store the backups in SkyDrive or Box. Below I will show you how to do it in detail. Select databases to back up: first connect to your SQL Server or Azure SQL Database, then select the databases you’d like to back up. Connect to the SkyDrive or Box cloud: if you have the free version of SQLBackupAndFTP, the Box destination is included, but the SkyDrive destination will be disabled, as it is available in the Standard version or above. Click “Try now” to get a 30-day trial of all options. On the “SkyDrive Settings” form you’ll need to authorize SQLBackupAndFTP to access your SkyDrive. Click “Authorize…” to open the SkyDrive authorization page in your browser, sign in to your SkyDrive account and click “Allow”. On the next page you will see the field with the authorization code. Copy it to the clipboard. The Box setup is just the same. After that, return to SQLBackupAndFTP, paste the authorization code and click “OK”. After you are authorized, you can enter the path to a backup folder. SQLBackupAndFTP will create the folder if it does not exist. That’s all that has to be done to back up to the SkyDrive or Box cloud. You can now click the “Run Now” button to test this job. Conclusion: whatever your preference for storing SQL backups, it is easy with SQLBackupAndFTP. Note that at the time of this writing they are running a very rare promotion on volume licenses: 5–9 licenses: 20% off; 10–19 licenses: 35% off; more than 20 licenses: 50% off. Please let me know your favorite options for storing the backups. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
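    Behind the scenes, tools like this typically issue the plain native backup command against a local folder and then upload the resulting file; a minimal T-SQL sketch of that step (the database name and path are placeholders, and this applies to an on-premises SQL Server rather than Azure SQL Database):

        -- Full backup to a local folder; the .bak file is what gets pushed to SkyDrive/Box
        BACKUP DATABASE [YourDatabase]
        TO DISK = N'D:\SQLBackups\YourDatabase.bak'
        WITH INIT, STATS = 10;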

    Read the article

  • Mixing Forms and Token Authentication in a single ASP.NET Application

    - by Your DisplayName here!
    I recently had the task to find out how to mix ASP.NET Forms Authentication with WIF’s WS-Federation. The FormsAuth app already existed, and a new sub-directory of this application should use ADFS for authentication. Minimum changes to the existing application code would be a plus ;) Since the application is using ASP.NET MVC this was quite easy to accomplish – WebForms would be a little harder, but still doable. I will discuss the MVC solution here. To solve this problem, I made the following changes to the standard MVC internet application template: Added WIF’s WSFederationAuthenticationModule and SessionAuthenticationModule to the modules section. Added a WIF configuration section to configure the trust with ADFS. Added a new authorization attribute. This attribute goes on controllers that demand ADFS (or STS in general) authentication. The attribute logic is quite simple – it checks for an authenticated user and, additionally, that the authentication type is set to Federation. If that’s the case all is good; if not, the redirect to the STS is triggered.

        public class RequireTokenAuthenticationAttribute : AuthorizeAttribute
        {
            protected override bool AuthorizeCore(HttpContextBase httpContext)
            {
                if (httpContext.User.Identity.IsAuthenticated &&
                    httpContext.User.Identity.AuthenticationType.Equals(
                        WIF.AuthenticationTypes.Federation, StringComparison.OrdinalIgnoreCase))
                {
                    return true;
                }

                return false;
            }

            protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
            {
                // do the redirect to the STS
                var message = FederatedAuthentication.WSFederationAuthenticationModule.CreateSignInRequest(
                    "passive", filterContext.HttpContext.Request.RawUrl, false);

                filterContext.Result = new RedirectResult(message.RequestUrl);
            }
        }

    That’s it ;) If you want to know why this works (and a possible gotcha) – read my next post.
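    Using the attribute is then just a matter of decorating the controllers that live in the ADFS-protected part of the site; for example (the controller name below is made up):

        using System.Web.Mvc;

        [RequireTokenAuthentication]
        public class FederatedAreaController : Controller
        {
            public ActionResult Index()
            {
                // only reached once the WS-Federation sign-in round trip has happened
                return View();
            }
        }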

    Read the article

< Previous Page | 186 187 188 189 190 191 192 193 194 195 196 197  | Next Page >