Search Results

Search found 25785 results on 1032 pages for 'oracles solution for soc'.

Page 504/1032 | < Previous Page | 500 501 502 503 504 505 506 507 508 509 510 511  | Next Page >

  • A simple Volume Replication Tool for large data set?

    - by Jin
    I'm looking for a solution to the following:

    Server A (Site A) - Win 2008 R2 - approx 10TB (15TB max) of data - well over 8 million files
    Server B (Site B) - Win 2008 R2

    I want to asynchronously replicate Server A's volume to a volume on Server B for data redundancy. Something that I can say to my users, "go here for data" when/if Server A goes belly up due to machine problems, disaster, etc. Windows 2008 R2 does have DFS, but Microsoft apparently does not support this large a dataset (or more accurately, more than 8 million files, according to the docs I could find). I also looked at Veritas Volume Replication, but this seems almost too much, as I would also require Veritas Volume Manager. There is numerous "backup" software that makes a 1:1 backup, which would be OK, but since it will be transferring over the internet, I'd like something that has compression during transfer, like DFS has. Does anyone have any suggestions regarding this?
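    One avenue worth sketching (my suggestion, not something from the question): rsync ports for Windows such as cwRsync or DeltaCopy do delta transfers with on-the-wire compression. A minimal sketch, assuming cwRsync is installed on both servers and the data volume is visible as /cygdrive/d/data:

        # mirror Server A's data volume to Server B, compressed in transit
        # -a archive mode, -z compress, --delete keeps B an exact mirror of A
        rsync -az --delete /cygdrive/d/data/ backupuser@serverB:/cygdrive/e/data/

    Whether any rsync port copes well with 8+ million files is worth testing before committing; the per-file scan is the likely bottleneck at that scale.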

    Read the article

  • Thunderbird: moving email from local Junk folder to IMAP folder yields "Message contains invalid header"

    - by Peltier
    Whenever I try to move an email from a local Junk folder to an IMAP folder in Thunderbird, I get the following error message:

    The current command did not succeed. The mail server responded: Message contains invalid header

    If Thunderbird's Junk folder is an IMAP folder on the server, then after Thunderbird has moved messages to that folder, I can successfully move messages from Junk back into some other IMAP folder. However, if the Junk folder is not on the server, then moving a message from the local Junk folder to an IMAP folder yields the aforementioned error. The only interesting thing I've found about this error is "Message contains invalid header" from the MozillaZine Knowledge Base. That article is officially about importing folders from another email client, and does not mention the Junk filter as another possible cause. Moreover, the proposed solution is not very helpful, since it requires manual editing of the mailbox files. Any better ideas? EDIT: make sure you read the comments before answering the question.

    Read the article

  • ArchBeat Top 20 for March 25-31, 2012

    - by Bob Rhubart
    The top 20 most-clicked links as shared via my social networks for the week of March 25-31, 2012.

    Oracle Cloud Conference: dates and locations worldwide
    The One Skill All Leaders Should Work On | Scott Edinger
    BPM in Retail Industry | Sanjeev Sharma
    Oracle VM: What if you have just 1 HDD system | @yvelikanov
    Solution for installing the ADF 11.1.1.6.0 Runtimes onto a standalone WLS 10.3.6 | @chriscmuir
    Beware the 'Facebook Effect' when service-orienting information technology | @JoeMcKendrick
    Using Oracle VM with Amazon EC2 | @pythianfielding
    Oracle BPM: Adding an attachment during the Human Task Initialization | Manh-Kiet Yap
    When Your Influence Is Ineffective | Chris Musselwhite and Tammie Plouffe
    Oracle Enterprise Pack for Eclipse 12.1.1 update on OTN
    A surefire recipe for cloud failure | @DavidLinthicum
    IT workers bore brunt of offshoring over past decade: analysis | @JoeMcKendrick
    Private cloud-public cloud schism is a meaningless distraction | @DavidLinthicum
    Oracle Systems and Solutions at OpenWorld Tokyo 2012
    Dissing Architects, or "What's wrong with the coffee?" | Bob Rhubart
    Validating an Oracle IDM Environment (including a Fusion Apps build out) | @FusionSecExpert
    Cookbook: SES and UCM setup | George Maggessy
    Red Samurai Tool Announcement - MDS Cleaner V2.0 | @AndrejusB
    OSB/OSR/OER in One Domain - QName violates loader constraints | John Graves
    Spring to Java EE Migration, Part 3 | @ensode

    Thought for the Day: "Inspire action amongst your comrades by being a model to avoid." — Leon Bambrick

    Read the article

  • Using both domain users and local users for Squid authentication?

    - by Massimo
    I'm working on a Squid proxy which needs to authenticate users against an Active Directory domain; this works fine, Samba was correctly set up, and Squid authenticates users via ntlm_auth. Relevant lines in squid.conf:

        auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
        auth_param ntlm children 5
        auth_param ntlm keep_alive on
        acl Authenticated proxy_auth REQUIRED
        http_access allow Authenticated
        http_access deny all

    Now, I need a way to allow access to users who don't have a domain account. I know I could create an "internet user" account in the domain, but this would allow access, although limited, to domain resources (file shares, etc.); I need something that will allow only Internet access. The ideal solution would be using a local account on the proxy server, either a Linux account or a Squid one; I know Squid supports this, but I'm unable to have it use both domain authentication and Squid/local authentication if domain auth is unsuccessful. Can this be done? How?
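    Squid can advertise more than one auth scheme at once, and clients generally fall back to Basic when they can't complete NTLM, so one hedged sketch (helper paths vary by distro and Squid version; the local password file here is an assumption) is to add a Basic helper backed by an htpasswd-style file alongside the existing NTLM block:

        auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
        auth_param ntlm children 5
        # Basic auth against a local password file for non-domain users
        auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/local_users
        auth_param basic children 5
        auth_param basic realm Internet access
        acl Authenticated proxy_auth REQUIRED
        http_access allow Authenticated
        http_access deny all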

    Read the article

  • Can't start webcam for google video services

    - by wisemonkey
    I've got Ubuntu 11.10 64-bit and have installed the Google video chat plugin. However, the webcam doesn't seem to work (black screen -- no video at all). For Cheese it works but shows a really bad (black and white, kinda) image. Following some link I installed guvcview; if I start it, the image looks neat. Any suggestions on how it can be fixed? If it helps, I've tried the solution:

        $ sudo mv /opt/google/talkplugin/GoogleTalkPlugin /opt/google/talkplugin/GoogleTalkPlugin.old
        $ sudo gedit /opt/google/talkplugin/GoogleTalkPlugin

    and putting the following lines in:

        #!/bin/sh
        LD_PRELOAD=/usr/lib32/libv4l/v4l1compat.so /opt/google/talkplugin/GoogleTalkPlugin.old

    OR

        #!/bin/sh
        LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l1compat.so /opt/google/talkplugin/GoogleTalkPlugin.old

    because I have both files. Finally:

        $ sudo chmod +x /opt/google/talkplugin/GoogleTalkPlugin.old

    I closed and reopened Chrome, then started Gmail and tried a video call -- black screen :-/ OK, so today Google+ finally provided me with a troubleshooting link and advised me:

    The plug-in won't install: If you're having trouble installing the plug-in, or are receiving a message asking you to reinstall it, you should check to make sure your configuration is right. To do so simply:

    1. Check to make sure the Google Talk Plugin Video Accelerator and Google Talk NPAPI Plugin are enabled. If you're using Chrome you can type about:plugins in your browser to display your plug-ins.
    2. Make sure you're not using Internet Explorer 64-bit (this is a browser version that is 64-bit as opposed to 32-bit).
    3. Ensure that you don't have any "click to run" extensions enabled.

    If you're still experiencing this issue after checking your configuration you can follow these steps:

    1. Refresh the browser page.
    2. Close any running Google Talk plug-in processes.
    3. Close all open and running browser processes.
    4. Restart your computer.
    5. Uninstall and then reinstall the plug-in.
    6. Try a different browser such as Google Chrome or Mozilla Firefox.

    I looked in about:plugins for Chrome and Firefox: I don't have the Google Talk NPAPI Plugin. Does that matter? I thought it was installed along with the Google Talk plugin -- or not?

    Read the article

  • One-week release cycle: how do I make this feasible?

    - by Arkaaito
    At my company (a 3-year-old web industry startup), we have frequent problems with the product team saying "aaaah this is a crisis patch it now!" (doesn't everybody?) This has an impact on the productivity (and morale) of engineering staff, myself included. Management has spent some time thinking about how to reduce the frequency of these same-day requests and has come up with the solution that we are going to have a release every week. (Previously we'd been doing one every two weeks, which usually slipped by a couple of days or so.) There are 13 developers and 6 local / 9 offshore testers; the theory is that only 4 developers (and all testers) will work on even-numbered releases, unless a piece of work comes up that really requires some specific expertise from one of the other devs. Each cycle will contain two days of dev work and two days of QA work (plus 1 day of scoping / triage / ...). My questions are:

    (a) Does anyone have experience with this length of release cycle?
    (b) Has anyone heard of this length of release cycle even being attempted?
    (c) If (a) or (b), how on Earth do you make it work? (Any pitfalls to avoid, etc., are also appreciated.)
    (d) How can we minimize the damage if this effort fails?

    Read the article

  • Change order of monitors without changing fullscreen "size"

    - by user171489
    I have a dual monitor setup. My primary monitor is a 22" with a max resolution of 1680x1050 and my secondary is a 19" with a max resolution of 1280x1024. The secondary is standing on the left side of the primary one. My problem is that if I change the order of the monitors in my NVIDIA X Server Settings, so that my secondary is the first one (or the one on the left), the fullscreen mode in Flash is scaled to my secondary monitor, even if it's displayed on my primary one. Meaning that I get a 1280x1024 "fullscreen" window on my bigger primary monitor. When I configure my X server settings so the secondary monitor is the one on the right, I don't have this problem. The only thing then is that I have to scroll out on the right to get to the monitor on my right. I can't move my secondary monitor to the right side of my primary due to lack of space and my belief that there must be a software solution. ;) Thanks in advance.
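    A hedged software-side sketch (the output names are assumptions; plain "xrandr" lists the real ones, and this needs a driver that exposes RandR, which older TwinView-era NVIDIA drivers did not):

        # list connected outputs and their real names
        xrandr
        # place the 19" (assumed DVI-0) left of the 22" (assumed DVI-1),
        # and keep the 22" the primary output that fullscreen apps should target
        xrandr --output DVI-0 --left-of DVI-1
        xrandr --output DVI-1 --primary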

    Read the article

  • Personal Software Process (PSP1)

    - by gentoo_drummer
    I'm trying to figure out an exercise but it doesn't really make too much sense. I'm not asking someone to provide the solution, just to try and analyse what needs to be done in order to solve this. I'm trying to understand which PSP 1.0/1.1 process I should use. PROBE? Or something else? I would greatly appreciate some help on this one from someone that has experience with the Personal Software Process methodology. Here is the question:

    For the reference case ("code1.c"), the following s/w metrics are provided:

    man-hours spent in implementation phase (per-module): 2.7 mh/file
    man-hours spent in testing phase (per-module): 4.3 mh/file
    estimated number of bugs remaining (per-module): 0.3 errors/function, 4 errors/module (remaining)

    Based on the corresponding values provided for the reference case, each of the following tasks focuses on some s/w metrics to be estimated for the test case ("code2.c") [25 marks]:

    (estimated) man-hours required in implementation phase (per-module) [8 marks]
    (estimated) man-hours required in testing phase (per-module) [8 marks]
    (estimated) number of bugs remaining at the end of testing phase (per-module) [9 marks]

    Tasks 4 through 6 should use the data provided for the reference case within the context of Personal Software Process level-1 (PSP-1), using them as a single-point historic data log. Specifically, the same s/w metrics are to be estimated for the test case ("code2.c"), using PSP as the basic estimation model. In order to perform the above listed tasks, students are advised to consider all phases of the PSP software development process, especially at levels PSP0 and PSP1. Both cases are to be treated as separate case-studies in the context of classic s/w development.
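    For what it's worth, PSP1's PROBE method estimates effort by fitting a linear regression through historic (size, effort) points; a sketch of the standard formulas (the general method only, not the worked answer to the exercise):

        \beta_1 = \frac{\sum_i x_i y_i - n\,\bar{x}\,\bar{y}}{\sum_i x_i^2 - n\,\bar{x}^2},
        \qquad \beta_0 = \bar{y} - \beta_1\,\bar{x},
        \qquad \hat{y} = \beta_0 + \beta_1 x

    With a single-point historic log, as in this exercise, the regression degenerates and PROBE falls back to a simple productivity rate, e.g. estimated implementation effort = (number of files in "code2.c") x (historic 2.7 mh/file).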

    Read the article

  • Using the Java SE 8 Date Time API with JPA 2.1

    - by reza_rahman
    Most of you are hopefully aware of the new Date Time API included in Java SE 8. If you are not, you should check it out right now using the Java Tutorial trail dedicated to the topic. It is a significant leap forward in processing temporal data in Java. For those who already use Joda-Time the changes will look very familiar: very simplistically speaking, the Java SE 8 feature is basically Joda-Time standardized. Quite naturally you will likely want to use the new Date Time API in your JPA domain model to better represent temporal data. The problem is that JPA 2.1 will not support the new API out of the box. So what are you to do? Fortunately you can make use of fairly simple JPA 2.1 Type Converters to use the Date Time API in your JPA domain classes. Steven Gertiser shows you how to do it in an extremely well written blog entry. Besides explaining the problem and the solution, the entry is actually very good for getting a better understanding of JPA 2.1 Type Converters as well. I think such a set of converters may be a good fit for Apache DeltaSpike as a Java EE 7 extension? In case you are wondering about Java SE 8 support in the JPA specification itself, Nick Williams has already entered an excellent, well researched JIRA entry asking for such support in a future version of the JPA specification that's well worth looking at. Another possibility of course is for JPA providers to start supporting the Date Time API natively before anything is formalized in the specification. What do you think?
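    To give a flavor of the technique, here is a minimal sketch in the spirit of the blog entry linked above (not a copy of it; the class name is mine):

        import java.sql.Date;
        import java.time.LocalDate;
        import javax.persistence.AttributeConverter;
        import javax.persistence.Converter;

        // Maps java.time.LocalDate to java.sql.Date so JPA 2.1 can persist it.
        // autoApply = true makes it apply to every LocalDate attribute in the unit.
        @Converter(autoApply = true)
        public class LocalDateConverter implements AttributeConverter<LocalDate, Date> {

            @Override
            public Date convertToDatabaseColumn(LocalDate attribute) {
                return attribute == null ? null : Date.valueOf(attribute);
            }

            @Override
            public LocalDate convertToEntityAttribute(Date dbData) {
                return dbData == null ? null : dbData.toLocalDate();
            }
        }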

    Read the article

  • Can you add DoubleClick macros to existing ads

    - by picus
    Setup: A few weeks back I made some very simple HTML5 "ads" to run on a few of our partner sites. They weren't paid ads, as we also manage these sites; however, there are a few of them, so I made a modular solution that is hosted on one of our web servers and included on each page via javascript, which outputs an iframe. Each search (the ad has a search box) or click appends a URL param that we track using custom vars in Google Analytics. In essence, the ad is an HTML page served in an iframe via javascript.

    Problem: We have an opportunity to run these ads on a third-party site. I had sent them a brief how-to for inserting the ads, and they came back saying that: "The creative code doesn't contain the %u macro. We can't substitute the default click-through URL without it." I am somewhat familiar with DoubleClick from a web developer's POV; I have inserted DC DART tags before and have even implemented the ad tool for publishers. I have not, however, actually ever created an ad for the DoubleClick network before. I assume the publisher needs these tags to track clicks and hence charge us. However, they have not responded to me in regards to these questions. Are macros something I can just add to or replace the existing links with, or do I need to completely set up the ad with DoubleClick - a big issue in the short term, given we do not have an advertiser's account set up with them. Thanks in advance

    Read the article

  • Using SPServices & jQuery to Find My Stuff from Multi-Select Person/Group Field

    - by Mark Rackley
    Okay… quick blog post for all you SPServices fans out there. I needed to quickly write a script that would return all the tasks currently assigned to me. I also wanted it to return any task that was assigned to a group I belong to. This can actually be done with a CAML query, so no big deal, right? The rub is that the "assigned to" field is a multi-select person or group field. As far as I know (and I actually know so little) you cannot just write a CAML query to return this information. If you can, please leave a comment below and disregard the rest of this blog post… So… what's a hacker to do? As always, I break things down to their most simple components (I really love the KISS principle and would get it tattooed on my back if people wouldn't think it meant "Knights In Satan's Service". You really gotta be an old fart to get that reference). Here's what we're going to do:

    1. Get the currently logged in user's name as it is stored in a person field
    2. Find all the SharePoint groups the current user belongs to
    3. Retrieve a set of assigned tasks from the task list and then find those that are assigned to the current user or a group the current user belongs to

    Nothing too hairy… So let's get started.

    Some Caveats before I continue

    There are some obvious performance implications with this solution as I make a total of four SPServices calls and there's a lot of looping going on. Also, the CAML query in this blog has NOT been optimized. If you move forward with this code, tweak it so that it returns a further subset of data or you will see horrible performance if you have a few hundred entries in your task list. Add a date range to the CAML or something. Find some way to limit the results as much as possible. Lastly, if you DO have a better solution, I would like you to share. Iron sharpens iron and all… Alright, let's really get started.

    Get the currently logged in user's name as it is stored in a person field

    First thing we need to do is understand how a person or group looks in the XML returned from a SharePoint Web Service call. It turns out it's stored like any other multi-select item in SharePoint, which is <id>;#<value>, and when you assign a person to that field the <value> equals the person's name -- "Mark Rackley" in my case. This is for Windows Authentication; I would expect this to be different in FBA, but I'm not using FBA. If you want to know what it looks like with FBA you can use the code in this blog and strategically place an alert to see the value. Anyway… I need to find the name of the user who is currently logged in as it is stored in the person field. This turns out to be one SPServices call:

        var userName = $().SPServices.SPGetCurrentUser({
            fieldName: "Title",
            debug: false
        });

    As you can see, the "Title" field has the information we need. I suspect (although again, I haven't tried) that the Title field also contains the user's name as we need it if I was using FBA. Okay… the last thing we need to do is store our user's name in an array for processing later:

        myGroups = new Array();
        myGroups.push(userName);

    Find all the SharePoint groups the current user belongs to

    Now for the groups. How are groups returned in that XML stream? Same as the person: <ID>;#<Group Name>, and if it's a multi-select it's all returned in one big long string "<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>". So, how do we find all the groups the current user belongs to?
    This is also a simple SPServices call. Using the "GetGroupCollectionFromUser" operation we can find all the groups a user belongs to. So, let's execute this method and store all our groups:

        $().SPServices({
            operation: "GetGroupCollectionFromUser",
            userLoginName: $().SPServices.SPGetCurrentUser(),
            async: false,
            completefunc: function(xData, Status) {
                $(xData.responseXML).find("[nodeName=Group]").each(function() {
                    myGroups.push($(this).attr("Name"));
                });
            }
        });

    So, all we did in the above code was execute the "GetGroupCollectionFromUser" operation, look for each "Group" node (row), and store the name of each group in the array that we put the user's name in previously (myGroups). Now we have an array that contains the current user's name as it will appear in the person field XML, and all the groups the current user belongs to.

    The Rest

    Now comes the easy part for all of you familiar with SPServices. We are going to retrieve our tasks from the Task list using "GetListItems" and look at each entry to see if it belongs to this person. If it does belong to this person we are going to store it for later processing. That code looks something like this:

        // get list of assigned tasks that aren't closed...
        // *modify the CAML to perform better!*
        $().SPServices({
            operation: "GetListItems",
            async: false,
            listName: "Tasks",
            CAMLViewFields: "<ViewFields>" +
                "<FieldRef Name='AssignedTo' />" +
                "<FieldRef Name='Title' />" +
                "<FieldRef Name='StartDate' />" +
                "<FieldRef Name='EndDate' />" +
                "<FieldRef Name='Status' />" +
                "</ViewFields>",
            CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",
            completefunc: function (xData, Status) {
                var aDataSet = new Array();
                // loop through each returned Task
                $(xData.responseXML).find("[nodeName=z:row]").each(function() {
                    // store the multi-select string of who the task is assigned to
                    var assignedToString = $(this).attr("ows_AssignedTo");
                    found = false;
                    // loop through the person's name and all the groups they belong to
                    for (var i = 0; i < myGroups.length; i++) {
                        // if the person's name or group exists in the assigned-to string
                        // then the task is assigned to them
                        if (assignedToString.indexOf(myGroups[i]) >= 0) {
                            found = true;
                            break;
                        }
                    }
                    // if the Task belongs to this person then store or display it
                    // (I'm storing it in an array)
                    if (found) {
                        var thisName = $(this).attr("ows_Title");
                        var thisStartDate = $(this).attr("ows_StartDate");
                        var thisEndDate = $(this).attr("ows_EndDate");
                        var thisStatus = $(this).attr("ows_Status");
                        var aDataRow = new Array(
                            thisName,
                            thisStartDate,
                            thisEndDate,
                            thisStatus);
                        aDataSet.push(aDataRow);
                    }
                });
                SomeFunctionToDisplayData(aDataSet);
            }
        });

    Some notes on why I did certain things, and additional caveats

    You will notice in my code that I'm doing an assignedToString.indexOf(groupName) to see if the task belongs to the person. This could possibly return bad results if you have SharePoint group names that are named in such a way that the indexOf returns a false positive. For example, if you have a group called "My Users" and a group called "My Users – SuperUsers", then a task assigned to "My Users – SuperUsers" would look like it belongs to a member of "My Users", because "My Users – SuperUsers".indexOf("My Users") succeeds. Make sense? Just be aware of this when naming groups; we don't have this problem. This is also where some fine-tuning can probably be done by those smarter than me. This is a pretty inefficient method to determine if a task belongs to a user -- I mean, what if a user belongs to 20 groups? That's a LOT of looping. See all the opportunities I give you guys to do something fun?? Also, why am I storing my values in an array instead of just writing them out to a Div? Well… I want to pass my data to a jQuery library to format it all nice and pretty, and an array is a great way to do that. When all is said and done and we put all the code together it looks like:

        $(document).ready(function() {
            var userName = $().SPServices.SPGetCurrentUser({
                fieldName: "Title",
                debug: false
            });
            myGroups = new Array();
            myGroups.push(userName);

            $().SPServices({
                operation: "GetGroupCollectionFromUser",
                userLoginName: $().SPServices.SPGetCurrentUser(),
                async: false,
                completefunc: function(xData, Status) {
                    $(xData.responseXML).find("[nodeName=Group]").each(function() {
                        myGroups.push($(this).attr("Name"));
                    });

                    // get list of assigned tasks that aren't closed...
                    // *modify this CAML to perform better!*
                    $().SPServices({
                        operation: "GetListItems",
                        async: false,
                        listName: "Tasks",
                        CAMLViewFields: "<ViewFields>" +
                            "<FieldRef Name='AssignedTo' />" +
                            "<FieldRef Name='Title' />" +
                            "<FieldRef Name='StartDate' />" +
                            "<FieldRef Name='EndDate' />" +
                            "<FieldRef Name='Status' />" +
                            "</ViewFields>",
                        CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",
                        completefunc: function (xData, Status) {
                            var aDataSet = new Array();
                            // loop through each returned Task
                            $(xData.responseXML).find("[nodeName=z:row]").each(function() {
                                // store the multi-select string of who the task is assigned to
                                var assignedToString = $(this).attr("ows_AssignedTo");
                                found = false;
                                // loop through the person's name and all the groups they belong to
                                for (var i = 0; i < myGroups.length; i++) {
                                    // if the person's name or group exists in the assigned-to string
                                    // then the task is assigned to them
                                    if (assignedToString.indexOf(myGroups[i]) >= 0) {
                                        found = true;
                                        break;
                                    }
                                }
                                // if the Task belongs to this person then store or display it
                                // (I'm storing it in an array)
                                if (found) {
                                    var thisName = $(this).attr("ows_Title");
                                    var thisStartDate = $(this).attr("ows_StartDate");
                                    var thisEndDate = $(this).attr("ows_EndDate");
                                    var thisStatus = $(this).attr("ows_Status");
                                    var aDataRow = new Array(
                                        thisName,
                                        thisStartDate,
                                        thisEndDate,
                                        thisStatus);
                                    aDataSet.push(aDataRow);
                                }
                            });
                            SomeFunctionToDisplayData(aDataSet);
                        }
                    });
                }
            });
        });

    Final Thoughts

    So, there you have it. Take it and run with it. Make it something cool (and tell me how you did it). Another possible way to improve performance in this scenario is to use a DVWP to display the tasks and use jQuery and the "myGroups" array from this blog post to hide all those rows that don't belong to the current user. I haven't tried it, but it does move some of the processing off to the server (generating the view) so it may perform better. As always, thanks for stopping by… hope you have a Merry Christmas…

    Read the article

  • Is there a better way to consume an ASP.NET Web API call in an MVC controller?

    - by davidisawesome
    In a new project I am creating for my work I am creating a fairly large ASP.NET Web API. The API will be in a separate Visual Studio solution that also contains all of my business logic and database interactions, as well as Model classes. In the test application I am creating (which is ASP.NET MVC4), I want to be able to hit an API URL I defined from the controller and cast the returned JSON to a Model class. The reason behind this is that I want to take advantage of strongly typing my views to a Model. This is all still in a proof of concept stage, so I have not done any performance testing on it, but I am curious if what I am doing is a good practice, or if I am crazy for even going down this route. Here is the code on the client controller:

        public class HomeController : Controller
        {
            protected string dashboardUrlBase = "http://localhost/webapi/api/StudentDashboard/";

            public ActionResult Index() // This view is strongly typed against User
            {
                // testing against Joe Bob
                string adSAMName = "jBob";
                WebClient client = new WebClient();
                string url = dashboardUrlBase + "GetUserRecord?userName=" + adSAMName;
                // 'User' is a Model class that I have defined.
                User result = JsonConvert.DeserializeObject<User>(client.DownloadString(url));
                return View(result);
            }
            .
            .
            .
        }

    If I choose to go this route, another thing to note is I am loading several partial views in this page (as I will also do in subsequent pages). The partial views are loaded via an $.ajax call that hits this controller and does basically the same thing as the code above:

    Instantiate a new WebClient
    Define the Url to hit
    Deserialize the result and cast it to a Model Class

    So it is possible (and likely) I could be performing the same actions 4-5 times for a single page. Is there a better method to do this that will:

    Let me keep strongly typed views.
    Do my work on the server rather than on the client (this is just a preference since I can write C# faster than I can write javascript).
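    For comparison, one commonly suggested refinement is sketched below (my sketch, not the questioner's code; it reuses the URL and User type from the question): share a single System.Net.Http.HttpClient and go async, so each API call doesn't block a request thread:

        using System.Net.Http;
        using System.Threading.Tasks;
        using System.Web.Mvc;
        using Newtonsoft.Json;

        public class HomeController : Controller
        {
            // HttpClient is designed to be shared; a single static instance
            // avoids socket exhaustion from new WebClient per request
            private static readonly HttpClient client = new HttpClient();
            protected string dashboardUrlBase = "http://localhost/webapi/api/StudentDashboard/";

            public async Task<ActionResult> Index()
            {
                string url = dashboardUrlBase + "GetUserRecord?userName=jBob";
                string json = await client.GetStringAsync(url);
                User result = JsonConvert.DeserializeObject<User>(json);
                return View(result); // view stays strongly typed to User
            }
        }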

    Read the article

  • O'Reilly deals to April 5, 2012 14:00 PT on books on "where"

    - by TATWORTH
    At http://shop.oreilly.com/category/deals/where-conference.do, O'Reilly are offering a series of books on geo-location at 50% off until April 5, 2012 14:00 PT.

    HTML5 Geolocation
    Truly revolutionary: now you can write geolocation applications directly in the browser, rather than develop native apps for particular devices. This concise book demonstrates the W3C Geolocation API in action, with code and examples to help you build HTML5 apps using the "write once, deploy everywhere" model. Along the way, you get a crash course in geolocation, browser support, and ways to integrate the API with common geo tools like Google Maps.

    HTML5 Cookbook
    With scores of practical recipes you can use in your projects right away, this cookbook helps you gain hands-on experience with HTML5's versatile collection of elements. You get clear solutions for handling issues with everything from markup semantics, web forms, and audio and video elements to related technologies such as geolocation and rich JavaScript APIs. Each informative recipe includes sample code and a detailed discussion on why and how the solution works. Perfect for intermediate to advanced web and mobile web developers, this handy book lets you choose the HTML5 features that work for you—and helps you experiment with the rest.

    HTML5 Applications
    HTML5 is not just a replacement for plugins. It also makes the Web a first-class development environment by giving JavaScript programmers a solid foundation for building industrial-strength applications. This practical guide takes you beyond simple site creation and shows you how to build self-contained HTML5 applications that can run on mobile devices and compete with desktop apps. You'll learn powerful JavaScript tools for exploiting HTML5 elements, and discover new methods for working with data, such as offline storage and multi-threaded processing. Complete with code samples, this book is ideal for experienced JavaScript and mobile developers alike.

    There are also other books being offered at a discount at http://shop.oreilly.com/category/deals/where-conference.do
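    For the curious, the core of the W3C Geolocation API that the first book covers fits in a few lines; a minimal sketch:

        // ask the browser for a one-shot position fix (prompts the user for permission)
        navigator.geolocation.getCurrentPosition(
            function (pos) {
                console.log(pos.coords.latitude, pos.coords.longitude);
            },
            function (err) {
                console.log("geolocation failed: " + err.message);
            }
        );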

    Read the article

  • Wordpress blog website apache and IIS subdirectory

    - by Philippe
    The issue is this: our company has a website hosted on an IIS server. I have recently been given the task of configuring a WordPress server for an eventual WordPress blog, so that our social media employee could test it and see how it works. This was completed successfully and easily on a new server with a WAMP configuration. The website was published as wordpress.domain.com and works fine. HOWEVER! I have now been requested to ensure that the soon-to-go-online blog is accessible through the address domain.com/blog. Is there a way to modify the original company website and simply redirect /blog to the Apache WordPress website? If not, is there a way to transfer wordpress.domain.com to the IIS server hosting the main website and keep the configuration? Is there a better solution that I haven't thought about (probably)? If so, what would you all suggest?
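    One hedged sketch of the usual approach (it assumes the IIS URL Rewrite module plus Application Request Routing are installed and ARR's proxy mode is enabled, and that wordpress.domain.com stays reachable from the IIS box): add a reverse-proxy rule to the main site's web.config so /blog is served by the Apache/WordPress machine:

        <system.webServer>
          <rewrite>
            <rules>
              <!-- proxy domain.com/blog/* to the Apache-hosted WordPress site -->
              <rule name="BlogReverseProxy" stopProcessing="true">
                <match url="^blog(/.*)?$" />
                <action type="Rewrite" url="http://wordpress.domain.com{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    WordPress would also need its site URL set to domain.com/blog (or a plugin to rewrite generated links), otherwise links on the proxied pages will point back at the subdomain.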

    Read the article

  • 12.04.3 - Missing Battery Icon, auto Suspend not working, Keyboard shortcuts volume up/down no longer working

    - by Navraj
    Problems I am experiencing:

    1. Battery icon not showing up in the Unity bar (top right corner).
    2. Volume up/down/mute keys not working. These Bluetooth hotkeys, along with the brightness up/down keys, no longer work on this keyboard (Apple wireless keyboard).
    3. Laptop no longer suspends when the lid is shut. I have to go to the 'power' button in the top right corner and click on 'Suspend'.

    All was working flawlessly until I did the following:

    1. Recently upgraded to the Nvidia proprietary driver version 319 {version recommended}.
    2. Installed XScreenSaver, then removed it and went back to the default screensaver.
    3. Did a system update (the 1st since installing) and am now currently running: Linux 3.8.0-32-generic #47~precise1-Ubuntu SMP Wed Oct 2 16:19:35 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux. NOTE: The base system was Ubuntu 12.04.3 installed from ISO; however, lsb_release reports "No LSB modules are available".
    4. Installed psensor.

    I have checked the power settings (via Settings) and the power settings via dconf-editor, and set them to the recommended values as described in posts detailing solutions to this problem. I have disabled 1) Nvidia settings at startup and 2) psensor at startup, but this does not help. I am using an HP DV7 with a 2GB Nvidia card. Not using any fancy graphics features. Recommendations? Thanks.

    Read the article

  • Can't use nvidia card/driver on optimus notebook

    - by Mr. Pixel
    I installed (once again) the latest official NVIDIA driver for my GT540M on Ubuntu 11.10. Everything seems OK with my xorg.conf file; I've manually added BusID "PCI:1:0:0", since lspci shows 01:00.0 for my GPU. The problem is, when I use the xorg.conf file generated by Xorg -configure, Xorg automatically loads the Intel GPU. So I removed everything that was not related to my NVIDIA card, basically leaving my xorg.conf with one screen and one device (with the nvidia driver and the above-mentioned BusID), and Xorg fails to start. The log says something like "Devices on GT540m [newline] none", and a few lines later, something like "NVIDIA(0) found a screen, but have no device for it". When I don't set the BusID, it doesn't seem to detect my card either. Thank you for any suggestion. PS: If possible, I'd like to avoid bumblebee or any similar "hybrid graphics" solution; last time I tried, I ended up reinstalling Ubuntu. Edit: Allow me to clarify the problem. I have a notebook with a GT540M graphics card and an integrated Intel GPU. I want to use the graphics card with full hardware acceleration and its official driver, as I do under Windows.
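    For reference, a minimal sketch of the kind of nvidia-only xorg.conf being described (the identifiers are assumptions; note that on most Optimus laptops the internal panel is wired to the Intel GPU, which is why X finds no usable screen on the discrete card without PRIME or bumblebee):

        Section "Device"
            Identifier "DiscreteNvidia"
            Driver     "nvidia"
            BusID      "PCI:1:0:0"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "DiscreteNvidia"
        EndSection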

    Read the article

  • Standards Matter: The Battle For Interoperability Continues

    - by michael.rowell
    Great article, although it is a little dated at this point: the InformationWeek article "Standards Matter: The Battle for Interoperability Goes On".

    Summary

    If you're guilty of relegating standards support to a "nice to have" feature rather than a requirement, you're part of the problem. If you want products to interoperate, be prepared to walk away if a vendor can't prove compliance. Don't be brushed off with promises of standards support "on the road map." The alternative is vendor lock-in and higher costs, including the cost of maintaining systems that don't work together. Standards bodies are imperfect and must do better. The alternative: splintered networks and broken promises.

    The point: "The secret sauce to a successful 'working standard' isn't necessarily IETF or another longstanding body," says Jonathan Feldman, director of IT services for the city of Asheville, N.C., and an InformationWeek Analytics contributor. "Rather, an earnest and honest effort by a group that has governance outside of a single corporation's control is what's important."

    In order to have true interoperability, vendors as well as customers must be actively engaged in the standards process. Vendors must be willing to truly work together and not be protecting an existing product. Customers must also be willing to truly work together, and not be demanding a solution that only meets their own needs rather than the needs of all participants. Ultimately, customers must be willing to reward vendor compliance by requiring compliance in the products and services that they purchase and deploy. Managers that deploy systems without compliance to standards are only hurting themselves. Standards do matter. When developed openly and deployed compliantly, standards deliver interoperability, which provides solid business value.

    Read the article

  • Web hosting providers for businesses (with offsite backups, disaster recovery options, etc.) [closed]

    - by Harry Muscle
    Possible Duplicate: How to find web hosting that meets my requirements?

    I'm wondering if anyone can point me in the direction of a couple of web hosting providers that are geared towards businesses. By this I mean providers that make it easy to create daily off-site backups, are aware that websites require disaster recovery options and have these in place or are able to assist with them, etc. We currently have about a dozen sites with various providers; however, I've been asked to consolidate all of these into one provider and create a full disaster recovery plan. Unfortunately it seems like most providers are geared towards average users who don't require all these extra bells and whistles that businesses need. For example, HostGator, which is a very popular and well reviewed provider, doesn't even allow you to schedule full backups; they have to be manually requested via cPanel and then downloaded once available. If anyone can point out a couple of companies that might be able to help with these sorts of things, that would be much appreciated. Thanks, Harry

    P.S. I should also add that we are hoping to stay away from having to manage our own server; we're hoping for a fully managed solution like what HostGator would offer, for example.

    Read the article

  • Switch or a Dictionary when assigning to new object

    - by KChaloux
    Recently, I've come to prefer mapping 1-1 relationships using Dictionaries instead of switch statements. I find it to be a little faster to write and easier to mentally process. Unfortunately, when mapping to a new instance of an object, I don't want to define it like this:

        var fooDict = new Dictionary<int, IBigObject>()
        {
            { 0, new Foo() }, // Creates an instance of Foo
            { 1, new Bar() }, // Creates an instance of Bar
            { 2, new Baz() }  // Creates an instance of Baz
        }

        var quux = fooDict[0]; // quux references Foo

    Given that construct, I've wasted CPU cycles and memory creating 3 objects, doing whatever their constructors might contain, and only ended up using one of them. I also believe that mapping other objects to fooDict[0] in this case will cause them to reference the same thing, rather than creating a new instance of Foo as intended. A solution would be to use a lambda instead:

        var fooDict = new Dictionary<int, Func<IBigObject>>()
        {
            { 0, () => new Foo() }, // Returns a new instance of Foo when invoked
            { 1, () => new Bar() }, // Ditto Bar
            { 2, () => new Baz() }  // Ditto Baz
        }

        var quux = fooDict[0](); // equivalent to saying 'var quux = new Foo();'

    Is this getting to a point where it's too confusing? It's easy to miss that () on the end. Or is mapping to a function/expression a fairly common practice? The alternative would be to use a switch:

        IBigObject quux;
        switch (someInt)
        {
            case 0: quux = new Foo(); break;
            case 1: quux = new Bar(); break;
            case 2: quux = new Baz(); break;
        }

    Which invocation is more acceptable?

    Dictionary: faster lookups and fewer keywords (case and break).
    Switch: more commonly found in code, and doesn't require the use of a Func<> object for indirection.

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1

    Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end state of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor cycle takes time and electricity and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know what the borders of the "safety zone" are exactly. Some examples of possible boundaries:

    I/O
    Exceptions/errors
    Interfaces with programs written in other languages
    Interfaces with other machines (physical, virtual, or theoretical)

    Special thanks to @JimmaHoffa for his comment which started this question!

    Part 2

    Multi-processor programming is often used as an optimization technique, to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

    Summary

    I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
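    One concrete middle ground worth illustrating (a sketch; the aggregation task is invented for the example): lock-free aggregation of results from many threads, where each writer swaps in a fresh immutable list via compare-and-set instead of taking a lock:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.concurrent.atomic.AtomicReference;

        public class Results {
            // holds an immutable snapshot; updated only by whole-object replacement
            private final AtomicReference<List<String>> results =
                new AtomicReference<>(Collections.<String>emptyList());

            public void publish(String r) {
                while (true) {  // retry loop instead of a lock
                    List<String> old = results.get();
                    List<String> next = new ArrayList<>(old);
                    next.add(r);
                    // CAS succeeds only if no other thread swapped in the meantime
                    if (results.compareAndSet(old, Collections.unmodifiableList(next))) {
                        return;
                    }
                }
            }

            public List<String> snapshot() { return results.get(); }
        }

    Note that the hardware is still doing synchronization here: compareAndSet maps to an atomic instruction, so immutability relocates the locking rather than abolishing it.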

    Read the article

  • How to ignore query parameters in web cache?

    - by eduardocereto
    Google Analytics uses some query parameters to identify campaigns and to do cookie control. This is all handled by JavaScript code. Take a look at the following example:

    http://www.example.com/?utm_source=newsletter&utm_medium=email&utm_term=October%2B2008&utm_campaign=promotion

    This will set cookies via JavaScript with the right campaign origin. These query parameters can have multiple and sometimes random values. Since they are used as cache hash keys, cache performance is heavily degraded in some scenarios. I suppose there's a not-so-hard configuration on cache servers to just ignore all query parameters, or specific query parameters. Am I right? Does anyone know how hard it is to do this in popular web cache solutions? I'm not interested in a specific web cache solution. It would be great to hear about the one you use.
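    As one data point, in Varnish this is a few lines of VCL (a sketch that only targets the Google Analytics utm_* parameters, which the backend never needs since they're consumed client-side):

        sub vcl_recv {
            # strip utm_* parameters so they don't fragment the cache key:
            # first those in the middle/end of the query string...
            set req.url = regsuball(req.url, "&utm_[a-z]+=[^&]*", "");
            # ...then one leading the query string followed by other params...
            set req.url = regsuball(req.url, "\?utm_[a-z]+=[^&]*&", "?");
            # ...and finally a lone leading one
            set req.url = regsub(req.url, "\?utm_[a-z]+=[^&]*$", "");
        }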

    Read the article

  • Dev Lead Job opening on my team

    My product unit (Parallel Developer Tools) is hiring a developer lead here in Redmond. This position is specifically on the debugger feature team that I "Program Manage". So, if you have what it takes and don't mind working with me every single day, click on the link below to read more and apply. You can also send me your resume and I'll make sure it gets to the right place and that you get a prompt response. There is a very long job description on the Microsoft careers site under job id 707388. Here is an excerpt from the middle (emphasis mine):

    "...We are in search of a talented and innovative senior lead software design engineer to own development of the debugging tools for data parallelism (including GP-GPU) and HPC Clusters being built by our team. To be successful, you need to be able to guide careers, design and architect well, communicate and share the best development practices, collaborate with your peers, contribute to the vision, and code significant portions of the solution. We want to hear from you if you're passionate about making your mark in the parallel development space, improving people, and building world-class tools."

    Responsibilities include:

    Managing a team of senior and junior developers
    Design and coding high-quality software
    ...

    For the full background story, requirements, qualifications and responsibilities please visit the official page. Comments about this post welcome at the original blog.

    Read the article

  • Software Design Idea for multi tier architecture

    - by Preyash
    I am currently investigating multi-tier architecture design for a web-based application in MVC3. I already have an architecture, but I'm not sure if it's the best I can do in terms of extendability and performance. The current architecture has the following components:

    DataTier (contains EF POCO objects)
    DomainModel (contains domain-related objects)
    Global (among other common things, it contains Repository objects for CRUD to the DB)
    Business Layer (business logic and interaction between Data and Client, and CRUD using the repository)
    Web (Client) (talks to DomainModel and Business, but also has its own ViewModels for Create and Edit views, for example)

    Note: I am using ValueInjecter for converting one type of entity to another (which is proving an overhead in this design; I really don't like overdoing this). My question is: do I have too many tiers in the above architecture? Do I really need the domain model? (I think I do when I expose my business logic via WCF to external clients.) What is happening is that for a simple database insert it (1) creates a ViewModel, (2) converts the ViewModel to a DomainModel for Business to understand, and (3) Business converts it to a DataModel for the Repository, and then the data comes back in the same order. A few things to consider: I am not looking for a perfect architecture solution, as that does not exist. I am looking for something that:

    Is scalable.
    Is reusable (e.g. using design patterns, interfaces, inheritance, etc.).
    Has layers that are easily testable.

    Any suggestions or comments are much appreciated. Thanks,

    Read the article

  • Docking station power adapter is not recognized by my Dell notebook

    - by Soner Gönül
    This morning, my laptop (Dell Latitude E6410) gave me this error. I didn't do anything, I didn't change anything, and I got this docking station just 2 months ago. I did a little research on the internet but I couldn't find a real solution for this situation. Now my docking station is not charging my laptop. I'm using Windows 7. What should I do in this situation?

    Your docking station power adapter is not recognized by your Dell notebook. As a result, your power adapter may not provide sufficient power to run the system, your battery will not charge, and your system will run slowly. Please insert a 65 watt Dell approved power adapter.

    Read the article

  • New CAM Editor v2.3 with Open-XDX for Open Data APIs

    - by drrwebber
    Creating actual working XML exchanges (loading data from data stores, generating XML, testing, integrating with web services, and then deployment delivery) takes a lot of coding and effort. Then there is writing the documentation, models, and schema, doing naming and design rule (NDR) checks, and packaging all this together (such as for NIEM IEPD use). What if there was a tool that helped you do all that easily and simply? Welcome to the new Open-XDX and the CAM Editor! Open-XDX uses code-free techniques in combination with CAM templates and visual drag and drop to rapidly design your XML exchange. Then Open-XDX will automatically generate all the SQL for you, read the database data, generate and populate the valid output XML, and filter with parameters. To complete the processing solution, Open-XDX works with web services and JDBC database connections as a callable module that can be deployed plug-and-play with your middleware stack, all with just a few lines of Java code (about 5, actually). You can build either Query/Response or Publish/Subscribe services from existing data stores to XML literally in minutes. To see a demonstration of using Open-XDX, a MySQL data store and integration with Oracle WebLogic Server, please see this short few-minutes video: http://youtube.com/user/TheCameditor There is also a Quick Guide available that provides more technical insights, along with a sample pack download of templates and SQL that you can try for yourself. Head on over to our project resource site to learn more, download the latest CAM Editor and see links to all the resources and materials. We look forward to seeing how the developer community is able to jump-start information sharing initiatives using this new innovative approach.

    Read the article

< Previous Page | 500 501 502 503 504 505 506 507 508 509 510 511  | Next Page >