Search Results

Search found 62161 results on 2487 pages for 'set difference'.


  • DevDays ‘10 The Netherlands day #1

    - by erwin21
    First day of DevDays 2010. I was looking forward to DevDays to see all the new things like VS2010, .NET 4.0 and MVC2. The lineup for this year is again better than the year before: there are 100+ sessions on all kinds of topics, like Cloud, Database, Mobile, SharePoint, User Experience, Visual Studio and Web. The first session of the day was a keynote by Anders Hejlsberg; he talked about the history and future of programming languages and gave his view on trends and influences in programming languages today and in the future. The second talk that I followed was from the famous Scott Hanselman, who talked about the basics of ASP.NET MVC 2. Although it was listed as a 300-level session, it was more like a level 100 session, but Scott mentioned that at the beginning, and it was still interesting to see all the basic things about MVC like the controllers, actions, routes, views, models etc. After lunch, the third talk for me was about moving ASP.NET WebForms applications to MVC, by Fritz Onion. In this session he converted an example WebForms application part by part into an MVC application, gave some interesting tips and tricks, and showed how to solve some issues that occur while converting. The fourth talk was about the difference between LINQ to SQL and the ADO.NET Entity Framework, by Kurt Claeys. He gave a good understanding of these two options; the demos covered both LINQ to SQL and the Entity Framework, and the goal was to get a good feel for when and where to use each option. The last talk of the day was also from Scott Hanselman: he went deeper into the features of ASP.NET MVC 2 and gave some interesting tips, the "ninja black belt" tips, covering the tooling, the new MVC 2 HTML helper methods, other view engines (like NHaml and Spark) and T4 templating. With these tips we can be more productive and create web applications better and faster. It was a long and interesting day, and I am looking forward to day #2.

    Read the article

  • External Monitors shut off when Laptop Lid closes

    - by John Lanz
    I have researched the solution... running gconftool-2 --type string --set /apps/gnome-power-manager/buttons/lid_ac "nothing" does not fix it. I have two external monitors, and when I close the lid the settings are reset and the laptop's monitor is set back to the default. Thanks! Here is the output of gsettings list-recursively org.gnome.settings-daemon.plugins.power:

        org.gnome.settings-daemon.plugins.power active true
        org.gnome.settings-daemon.plugins.power button-hibernate 'nothing'
        org.gnome.settings-daemon.plugins.power button-power 'nothing'
        org.gnome.settings-daemon.plugins.power button-sleep 'nothing'
        org.gnome.settings-daemon.plugins.power button-suspend 'nothing'
        org.gnome.settings-daemon.plugins.power critical-battery-action 'suspend'
        org.gnome.settings-daemon.plugins.power idle-brightness 30
        org.gnome.settings-daemon.plugins.power idle-dim-ac false
        org.gnome.settings-daemon.plugins.power idle-dim-battery true
        org.gnome.settings-daemon.plugins.power idle-dim-time 10
        org.gnome.settings-daemon.plugins.power lid-close-ac-action 'nothing'
        org.gnome.settings-daemon.plugins.power lid-close-battery-action 'nothing'
        org.gnome.settings-daemon.plugins.power notify-perhaps-recall true
        org.gnome.settings-daemon.plugins.power percentage-action 2
        org.gnome.settings-daemon.plugins.power percentage-critical 3
        org.gnome.settings-daemon.plugins.power percentage-low 10
        org.gnome.settings-daemon.plugins.power priority 1
        org.gnome.settings-daemon.plugins.power sleep-display-ac 600
        org.gnome.settings-daemon.plugins.power sleep-display-battery 600
        org.gnome.settings-daemon.plugins.power sleep-inactive-ac false
        org.gnome.settings-daemon.plugins.power sleep-inactive-ac-timeout 0
        org.gnome.settings-daemon.plugins.power sleep-inactive-ac-type 'suspend'
        org.gnome.settings-daemon.plugins.power sleep-inactive-battery true
        org.gnome.settings-daemon.plugins.power sleep-inactive-battery-timeout 0
        org.gnome.settings-daemon.plugins.power sleep-inactive-battery-type 'suspend'
        org.gnome.settings-daemon.plugins.power time-action 120
        org.gnome.settings-daemon.plugins.power time-critical 300
        org.gnome.settings-daemon.plugins.power time-low 1200
        org.gnome.settings-daemon.plugins.power use-time-for-policy true

    Read the article

  • Using Oracle Data in the Business Rules Engine

    - by Christopher House
    Yesterday I started working on some new functionality that I had planned to implement using the Business Rules Engine.  As I got further into it, I realized that some of my rules were going to need to reference some data that resides in an Oracle database.  I knew the Business Rules Composer supports using DataConnections and TypedDataTables, but I'd never used this functionality myself, so I wasn't sure how it would work with Oracle.  As it turns out, it's very doable; there's just a little hoop you need to jump through. I fired up BRC and my suspicions were quickly confirmed: BRC only recognizes SQL Server databases when it comes to editing rules.  Not letting that deter me, I decided to see if I could “trick” BRE into using Oracle data. On my local SQL Server, I created a new database, and in that database created a table that matched the schema of the table I wanted to use in the Oracle database.  I then set about creating my rules, referencing the new SQL Server database everywhere I wanted to use Oracle data.  Finally, I created a new class library and added a class that implements Microsoft.RuleEngine.IFactRetriever.  In that class, I added the necessary code to get a DataSet from the Oracle server, wrap it in a TypedDataTable and assert it into the rule engine.  It's worth pointing out that in my IFactRetriever class, I made sure to set my DataSet name to the name of the database I'd referenced in the BRC, and the DataTable's name to the name of the table that I'd referenced in the BRC. After gac'ing the new class library and deploying my policy, I tested and everything worked as expected.
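    For illustration, here's a minimal sketch of what such a fact retriever could look like. The post doesn't show its actual code, so the connection string, the query, and the "RulesDb"/"Customers" names below are hypothetical stand-ins for whatever database and table names are referenced in the BRC:

        using System.Data;
        using System.Data.OracleClient; // era-appropriate; any Oracle ADO.NET provider works
        using Microsoft.RuleEngine;

        public class OracleFactRetriever : IFactRetriever
        {
            public object UpdateFacts(RuleSetInfo ruleSetInfo, RuleEngine ruleEngine, object factsHandleIn)
            {
                // Only fetch and assert the facts once per engine instance.
                if (factsHandleIn == null)
                {
                    using (var connection = new OracleConnection("Data Source=OracleDb;User Id=rules;Password=secret"))
                    using (var adapter = new OracleDataAdapter("SELECT * FROM CUSTOMERS", connection))
                    {
                        // The DataSet name must match the (SQL Server) database name
                        // referenced in the Business Rules Composer, and the table
                        // name must match the table referenced there.
                        var dataSet = new DataSet("RulesDb");
                        adapter.Fill(dataSet, "Customers");

                        var typedTable = new TypedDataTable(dataSet.Tables["Customers"]);
                        ruleEngine.Assert(typedTable);
                        factsHandleIn = typedTable;
                    }
                }
                return factsHandleIn;
            }
        }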

    Read the article

  • SQL SERVER – Solution – Challenge – Puzzle – Usage of FAST Hint

    - by pinaldave
    Earlier I posted a quick puzzle and received a wonderful response to it from Brad Schulz. Today we will go over the solution. The puzzle was posted here: SQL SERVER – Challenge – Puzzle – Usage of FAST Hint. The question was: under what conditions is the FAST hint useful? In response to this puzzle, SQL Server expert Brad Schulz pointed me to his blog post where he explains how the FAST hint can be useful. I strongly recommend reading his blog post over here. With Brad's permission, I am reproducing the following queries here. He has come up with an example where the FAST hint improves the performance.

        USE AdventureWorks
        GO
        DECLARE @DesiredDateAtMidnight DATETIME = '20010709'
        DECLARE @NextDateAtMidnight DATETIME = DATEADD(DAY,1,@DesiredDateAtMidnight)
        -- Query without FAST
        SELECT OrderID=h.SalesOrderID
            ,h.OrderDate
            ,h.TerritoryID
            ,TerritoryName=t.Name
            ,c.CardType
            ,c.CardNumber
            ,CardExpire=RIGHT(STR(100+ExpMonth),2)+'/'+STR(ExpYear,4)
            ,h.TotalDue
        FROM Sales.SalesOrderHeader h
        LEFT JOIN Sales.SalesTerritory t ON h.TerritoryID=t.TerritoryID
        LEFT JOIN Sales.CreditCard c ON h.CreditCardID=c.CreditCardID
        WHERE OrderDate>=@DesiredDateAtMidnight
            AND OrderDate<@NextDateAtMidnight
        ORDER BY h.SalesOrderID;
        -- Query with FAST(10)
        SELECT OrderID=h.SalesOrderID
            ,h.OrderDate
            ,h.TerritoryID
            ,TerritoryName=t.Name
            ,c.CardType
            ,c.CardNumber
            ,CardExpire=RIGHT(STR(100+ExpMonth),2)+'/'+STR(ExpYear,4)
            ,h.TotalDue
        FROM Sales.SalesOrderHeader h
        LEFT JOIN Sales.SalesTerritory t ON h.TerritoryID=t.TerritoryID
        LEFT JOIN Sales.CreditCard c ON h.CreditCardID=c.CreditCardID
        WHERE OrderDate>=@DesiredDateAtMidnight
            AND OrderDate<@NextDateAtMidnight
        ORDER BY h.SalesOrderID
        OPTION(FAST 10)

    Now when you check the execution plans for the two queries, you will find a visible difference: the query with FAST returns results with a much lower cost. Thank you Brad for the excellent post and for teaching us something. I request all of you to read the original blog post written by Brad for much more information. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Ten - oh, wait, eleven - Eleven things you should know about the ASP.NET Fall 2012 Update

    - by Jon Galloway
    Today, just a little over two months after the big ASP.NET 4.5 / ASP.NET MVC 4 / ASP.NET Web API / Visual Studio 2012 / WebMatrix 2 release, the first preview of the ASP.NET Fall 2012 Update is out. Here's what you need to know:

    - There are no new framework bits in this release - there's no change or update to the core ASP.NET runtime, ASP.NET MVC or Web Forms features. This means that you can start using it without any updates to your server, upgrade concerns, etc.
    - This update is really an update to the project templates and Visual Studio tooling, conceptually similar to the ASP.NET MVC 3 Tools Update.
    - It's a relatively lightweight install. It's a 41MB download. I've installed it many times and it usually takes 5-7 minutes; it's never required a reboot.
    - It adds some new project templates to ASP.NET MVC: Facebook Application and Single Page Application templates.
    - It adds a lot of cool enhancements to ASP.NET Web API.
    - It adds some tooling that makes it easy to take advantage of features like SignalR, Friendly URLs, and Windows Azure Authentication.
    - Most of the new features are installed via NuGet packages. Since ASP.NET is open source, nightly NuGet packages are available, and the roadmap is published, most of this has really been publicly available for a while.
    - The official name of this drop is the ASP.NET Fall 2012 Update BUILD Prerelease. Please do not attempt to say that ten times fast. While the EULA doesn't prohibit it, it WILL legally change your first name to Scott.
    - As with all new releases, you can find out everything you need to know about the Fall Update at http://asp.net/vnext (especially the release notes!)
    - I'm going to be showing all of this off, assisted by special guest code monkey Scott Hanselman, this Friday at BUILD: Bleeding edge ASP.NET: See what is next for MVC, Web API, SignalR and more… (and I've heard it will be livestreamed).

    Let's look at some of those things in more detail.

    No new bits

    ASP.NET 4.5, MVC 4 and Web API have a lot of great core features. I see the goal of this update release as making it easier to put those features to use to solve some useful scenarios by taking advantage of NuGet packages and template code. If you create a new ASP.NET MVC application using one of the new templates, you'll see that it's using the ASP.NET MVC 4 RTM NuGet package (4.0.20710.0). This means you can install and use the Fall Update without any impact on your existing projects and no worries about upgrading or compatibility.

    New Facebook Application Template

    ASP.NET MVC 4 (and ASP.NET 4.5 Web Forms) included the ability to authenticate your users via OAuth and OpenID, so you could let users log in to your site using a Facebook account. One of the new changes in the Fall Update is a new template that makes it really easy to create full Facebook applications. You could create a Facebook application in ASP.NET already, you'd just need to go through a few steps:

    1. Search around to find a good Facebook NuGet package, like the Facebook C# SDK (written by my friend Nathan Totten and some other Facebook SDK brainiacs).
    2. Read the Facebook developer documentation to figure out how to authenticate and integrate with them.
    3. Write some code, debug it and repeat until you got something working.
    4. Get started with the application you'd originally wanted to write.

    What this template does for you: eliminate steps 1-3.
    Erik Porter, Nathan and some other experts built out the Facebook Application template so it automatically pulls in and configures the Facebook NuGet package and makes it really easy to take advantage of it in an ASP.NET MVC application. One great example is the way you access a Facebook user's information. Take a look at the following code in a File / New / MVC / Facebook Application site. First, the Home Controller Index action:

        [FacebookAuthorize(Permissions = "email")]
        public ActionResult Index(MyAppUser user, FacebookObjectList<MyAppUserFriend> userFriends)
        {
            ViewBag.Message = "Modify this template to jump-start your Facebook application using ASP.NET MVC.";
            ViewBag.User = user;
            ViewBag.Friends = userFriends.Take(5);
            return View();
        }

    Notice that there's a FacebookAuthorize attribute which requires that the user is authenticated via Facebook and requires permissions to access their e-mail address. It binds to two things: a custom MyAppUser object and a list of friends. Let's look at the MyAppUser code:

        using Microsoft.AspNet.Mvc.Facebook.Attributes;
        using Microsoft.AspNet.Mvc.Facebook.Models;

        // Add any fields you want to be saved for each user and specify the field name in the JSON coming back from Facebook
        // https://developers.facebook.com/docs/reference/api/user/
        namespace MvcApplication3.Models
        {
            public class MyAppUser : FacebookUser
            {
                public string Name { get; set; }

                [FacebookField(FieldName = "picture", JsonField = "picture.data.url")]
                public string PictureUrl { get; set; }

                public string Email { get; set; }
            }
        }

    You can add in other custom fields if you want, but you can also just bind to a FacebookUser and it will automatically pull in the available fields. You can even just bind directly to a FacebookUser and check for what's available in debug mode, which makes it really easy to explore. For more information and some walkthroughs on creating Facebook applications, see:

    - Deploying your first Facebook App on Azure using ASP.NET MVC Facebook Template (Yao Huang Lin)
    - Facebook Application Template Tutorial (Erik Porter)

    Single Page Application template

    Early releases of ASP.NET MVC 4 included a Single Page Application template, but it was removed for the official release. There was a lot of interest in it, but it was kind of complex, as it handled features for things like data management. The new Single Page Application template that ships with the Fall Update is more lightweight. It uses Knockout.js on the client and ASP.NET Web API on the server, and it includes a sample application that shows how they all work together. I think the real benefit of this application is that it shows a good pattern for using ASP.NET Web API and Knockout.js. For instance, it's easy to end up with a mess of JavaScript when you're building out a client-side application. This template uses three separate JavaScript files (delivered via a Bundle, of course):

    - todoList.js - this is where the main client-side logic lives
    - todoList.dataAccess.js - this defines how the client-side application interacts with the back-end services
    - todoList.bindings.js - this is where you set up events and overrides for the Knockout bindings - for instance, hooking up jQuery validation and defining some client-side events

    This is a fun one to play with, because you can just create a new Single Page Application and hit F5.

    Quick, easy install (with one gotcha)

    One of the cool engineering changes for this release is a big update to the installer to make it more lightweight and efficient.
    I've been running nightly builds of this for a few weeks to prep for my BUILD demos, and the install has been really quick and easy to use. The install takes about 5 minutes, has never required a reboot for me, and the uninstall is just as simple. There's one gotcha, though: in this preview release, you may hit an issue when you create a new MVC application that requires you to uninstall and re-install the NuGet VSIX package. The solution, as explained in the release notes, is:

    1. Start Visual Studio 2012 as an Administrator.
    2. Go to Tools -> Extensions and Updates and uninstall NuGet.
    3. Close Visual Studio.
    4. Navigate to the ASP.NET Fall 2012 Update installation folder:
       - For Visual Studio 2012: Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio 2012
       - For Visual Studio 2012 Express for Web: Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio Express 2012 for Web
    5. Double-click on NuGet.Tools.vsix to reinstall NuGet.

    This took me under a minute to do, and I was up and running.

    ASP.NET Web API Update Extravaganza!

    Uh, the Web API team is out of hand. They added a ton of new stuff: OData support, Tracing, and API Help Page generation.

    OData support

    Some people like OData. Some people start twitching when you mention it. If you're in the first group, this is for you. You can add a [Queryable] attribute to an API action that returns an IQueryable<Whatever> and you get OData query support from your clients. Then, without any extra changes to your client or server code, your clients can send filters like this: /Suppliers?$filter=Name eq ‘Microsoft’
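    To make that concrete, here's a minimal sketch of what such an action could look like. The Supplier model and the in-memory data source are hypothetical, and the exact package that provides [Queryable] in this preview is covered in the release notes:

        using System.Linq;
        using System.Web.Http;

        public class Supplier
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class SuppliersController : ApiController
        {
            // Hypothetical in-memory data source, for illustration only.
            private static readonly Supplier[] suppliers =
            {
                new Supplier { Id = 1, Name = "Microsoft" },
                new Supplier { Id = 2, Name = "Contoso" }
            };

            // [Queryable] lets clients append OData query options such as
            // $filter, $orderby and $top to the request URI.
            [Queryable]
            public IQueryable<Supplier> Get()
            {
                return suppliers.AsQueryable();
            }
        }

    With this in place, a request like /api/Suppliers?$filter=Name eq 'Microsoft' is filtered on the server before the response is serialized.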
    For more information about OData support in ASP.NET Web API, see Alex James' mega-post about it: OData support in ASP.NET Web API.

    ASP.NET Web API Tracing

    Tracing makes it really easy to leverage the .NET tracing system from within your ASP.NET Web APIs. If you look at the \App_Start\WebApiConfig.cs file in a new ASP.NET Web API project, you'll see a call to TraceConfig.Register(config). That calls into some code in the new \App_Start\TraceConfig.cs file:

        public static void Register(HttpConfiguration configuration)
        {
            if (configuration == null)
            {
                throw new ArgumentNullException("configuration");
            }

            SystemDiagnosticsTraceWriter traceWriter = new SystemDiagnosticsTraceWriter()
            {
                MinimumLevel = TraceLevel.Info,
                IsVerbose = false
            };

            configuration.Services.Replace(typeof(ITraceWriter), traceWriter);
        }

    As you can see, this is using the standard trace system, so you can extend it to any other trace listeners you'd like. To see how it works with the built-in diagnostics trace writer, just run the application, call some APIs, and look at the Visual Studio Output window:

        iisexpress.exe Information: 0 : Request, Method=GET, Url=http://localhost:11147/api/Values, Message='http://localhost:11147/api/Values'
        iisexpress.exe Information: 0 : Message='Values', Operation=DefaultHttpControllerSelector.SelectController
        iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=DefaultHttpControllerActivator.Create
        iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=HttpControllerDescriptor.CreateController
        iisexpress.exe Information: 0 : Message='Selected action 'Get()'', Operation=ApiControllerActionSelector.SelectAction
        iisexpress.exe Information: 0 : Operation=HttpActionBinding.ExecuteBindingAsync
        iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuting
        iisexpress.exe Information: 0 : Message='Action returned 'System.String[]'', Operation=ReflectedHttpActionDescriptor.ExecuteAsync
        iisexpress.exe Information: 0 : Message='Will use same 'JsonMediaTypeFormatter' formatter', Operation=JsonMediaTypeFormatter.GetPerRequestFormatterInstance
        iisexpress.exe Information: 0 : Message='Selected formatter='JsonMediaTypeFormatter', content-type='application/json; charset=utf-8'', Operation=DefaultContentNegotiator.Negotiate
        iisexpress.exe Information: 0 : Operation=ApiControllerActionInvoker.InvokeActionAsync, Status=200 (OK)
        iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuted, Status=200 (OK)
        iisexpress.exe Information: 0 : Operation=ValuesController.ExecuteAsync, Status=200 (OK)
        iisexpress.exe Information: 0 : Response, Status=200 (OK), Method=GET, Url=http://localhost:11147/api/Values, Message='Content-type='application/json; charset=utf-8', content-length=unknown'
        iisexpress.exe Information: 0 : Operation=JsonMediaTypeFormatter.WriteToStreamAsync
        iisexpress.exe Information: 0 : Operation=ValuesController.Dispose

    API Help Page

    When you create a new ASP.NET Web API project, you'll see an API link in the header. Clicking the API link shows generated help documentation for your ASP.NET Web API controllers, and clicking on any of those APIs shows specific information. What's great is that this information is dynamically generated, so if you add your own new APIs it will automatically show useful and up-to-date help. This system is also completely extensible, so you can generate documentation in other formats or customize the HTML help as much as you'd like. The Help generation code is all included in an ASP.NET MVC Area.

    SignalR

    SignalR is a really slick open source project that was started by some ASP.NET team members in their spare time to add real-time communications capabilities to ASP.NET - and .NET applications in general. It allows you to handle long-running communications channels between your server and multiple connected clients using the best communications channel they can both support - WebSockets if available, falling back all the way to old technologies like long polling if necessary for old browsers. SignalR remains an open source project, but now it's being included in ASP.NET (also open source, hooray!). That means there's real, official ASP.NET engineering work being put into SignalR, and it's even easier to use in an ASP.NET application. Now in any ASP.NET project type, you can right-click / Add / New Item... SignalR Hub or Persistent Connection.
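    As a quick taste, here's a minimal sketch of the classic SignalR chat hub; the ChatHub class name and the addMessage client function are illustrative, not part of the update's templates:

        using Microsoft.AspNet.SignalR;

        public class ChatHub : Hub
        {
            // Any connected client can call Send; the hub then broadcasts
            // the message to the addMessage function on every client.
            public void Send(string name, string message)
            {
                Clients.All.addMessage(name, message);
            }
        }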
    And much more...

    There's quite a bit more. You can find more info at http://asp.net/vnext, and we'll be adding more content as fast as we can. Watch my BUILD talk to see me demonstrate these and other features in the ASP.NET Fall 2012 Update, as well as some other even futurey-er stuff!

    Read the article

  • ASP.NET MVC Postbacks and HtmlHelper Controls ignoring Model Changes

    - by Rick Strahl
    So here's a binding behavior in ASP.NET MVC that I didn't really get until today: HtmlHelper controls (like .TextBoxFor() etc.) don't bind to model values on a postback, but rather get their value directly out of the POST buffer from ModelState. Effectively it looks like you can't change the display value of a control via model value updates on a postback operation. To demonstrate, here's an example. I have a small section in a document where I display an editable email address. This is what the form displays on a GET operation, and as expected I get the email value displayed in both the textbox and the plain value display below, which reflects the value in the model. I added a plain text value to demonstrate the model value compared to what's rendered in the textbox. The relevant markup is the email address, which needs to be manipulated via the model in the Controller code. Here's the Razor markup:

        <div class="fieldcontainer">
            <label>
                Email: &nbsp; <small>(username and <a href="http://gravatar.com">Gravatar</a> image)</small>
            </label>
            <div>
                @Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield"})
                @Model.User.Email
            </div>
        </div>

    So, I have this form and the user can change their email address. On postback the POST controller code then asks the business layer whether the change is allowed. If it's not, I want to reset the email address back to the old value which exists in the database and was previously stored. The obvious thing to do would be to modify the model. Here's the Controller logic block that deals with that:

        // did user change email?
        if (!string.IsNullOrEmpty(oldEmail) && user.Email != oldEmail)
        {
            if (userBus.DoesEmailExist(user.Email))
            {
                userBus.ValidationErrors.Add("New email address exists already. Please…");
                user.Email = oldEmail;
            }
            else
                // allow email change but require verification by forcing a login
                user.IsVerified = false;
        }
        …
        model.user = user;
        return View(model);

    The logic is straightforward - if the new email address is not valid because it already exists, I don't want to display the new email address the user entered, but rather the old one. To do this I change the value on the model, which effectively does this:

        model.user.Email = oldEmail;
        return View(model);

    So when I press the Save button after entering in my new email address ([email protected]), here's what comes back in the rendered view: the textbox value and the raw displayed model value are different. The textbox displays the POST value; the raw value displays the actual model value. This means that MVC renders the textbox value from the POST data rather than from the view data when an HTTP POST is active. Now I don't know about you, but this is not the behavior I expected - initially. This behavior effectively means that I cannot modify the contents of the textbox from the Controller code if using HtmlHelpers for binding. Updating the model for display purposes in a POST has, in effect, no effect. (Apr. 25, 2012 - edited the post heavily based on comments and more experimentation)

    What should the behavior be?

    After getting quite a few comments on this post, I quickly realized that the behavior I described above is actually the behavior you'd want in 99% of the binding scenarios. You do want to get the POST values back into your input controls at all times, so that the data displayed on a form for the user matches what they typed.
    So if an error occurs, the error doesn't mysteriously disappear, getting replaced either with a default value or some value that you changed on the model on your own. Makes sense. Still, it is a little non-obvious, because of the way you create the UI elements with MVC - it certainly looks like you are binding to the model value:

        @Html.TextBoxFor( mod=> mod.User.Email, new {type="email",@class="inputfield",required="required" })

    and so unless one understands a little bit about how the model binder works, this is easy to trip up on. At least it was for me. Even though I'm telling the control which model value to bind to, that model value is only used initially on GET operations. After that, ModelState/POST values provide the display value.

    Workarounds

    The default behavior should be fine for 99% of binding scenarios. But if you do need to fix up values based on your model rather than the default POST values, there are a number of ways that you can work around this. Initially when I ran into this, I couldn't figure out how to set the value using code, so the simplest solution to me was simply to not use the MVC Html Helper for the specific control and explicitly bind the model via HTML markup and a @Razor expression:

        <input type="text" name="User.Email" id="User_Email" value="@Model.User.Email" />

    And this produces the right result. This is easy enough to create, but feels a little out of place when using the @Html helpers for everything else. As you can see by the difference in the name and id values, you also are forced to remember the naming conventions that MVC imposes in order for model binding to work properly, which is a pain to remember and set manually (name is the same as the property with . syntax, id replaces dots with underscores).

    Use the ModelState

    Some of my original confusion came because I didn't understand how the model binder works. The model binder basically maintains ModelState on a postback, which holds a value and binding errors for each of the POST values submitted on the page that can be mapped to the model. In other words, there's one ModelState entry for each bound property of the model. Each ModelState entry contains a value property that holds AttemptedValue and RawValue properties. The AttemptedValue is essentially the POST value retrieved from the form. The RawValue is the value that the model holds. When MVC binds controls like @Html.TextBoxFor() or @Html.TextBox(), it always binds values on a GET operation. On a POST operation, however, it'll always use the AttemptedValue to display the control. MVC binds using the ModelState on a POST operation, not the model's value. So, if you want the behavior that I was expecting originally, you can actually get it by clearing the ModelState in the controller code:

        ModelState.Clear();

    This clears out all the captured ModelState values, and effectively binds to the model. Note this will produce very similar results - in fact, if there are no binding errors you see exactly the same behavior as if binding from ModelState, because the model has been updated from the ModelState already, and binding to the updated values most likely produces the same values you would get with POST back values. The big difference, though, is that any values that couldn't bind - like say putting a string into a numeric field - will now not display back the value the user typed, but the default field value or whatever you changed the model value to. This is the behavior I was actually expecting previously. But - clearing out all values might be a bit heavy-handed.
    You might want to fix up one or two values in a model, but rarely would you want every value on the form to update from the model. So, you can also clear out individual values on an as-needed basis:

        if (userBus.DoesEmailExist(user.Email))
        {
            userBus.ValidationErrors.Add("New email address exists already. Please…");
            user.Email = oldEmail;
            ModelState.Remove("User.Email");
        }

    This allows you to remove a single value from the ModelState, and effectively allows you to replace that value for display from the model.

    Why?

    While researching this, I came across a post from Microsoft's Brad Wilson who describes the default binding behavior best in a forum post:

        The reason we use the posted value for editors rather than the model value is that the model may not be able to contain the value that the user typed. Imagine in your "int" editor the user had typed "dog". You want to display an error message which says "dog is not valid", and leave "dog" in the editor field. However, your model is an int: there's no way it can store "dog". So we keep the old value. If you don't want the old values in the editor, clear out the Model State. That's where the old value is stored and pulled from the HTML helpers.

    There you have it. It's not the most intuitive behavior, but in hindsight this behavior does make some sense, even if at first glance it looks like you should be able to update values from the model. The solution of clearing ModelState works and is a reasonable one, but you have to know about some of the innards of ModelState and how it actually works to figure that out.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, MVC.

    Read the article

  • web application or web portal? [closed]

    - by klo
    As the title says: the differences between those 2. I have read all the definitions and some articles, but I need information about some other aspects. Here is the thing. We want to build a web site that will contain: a site, a database, uploads, and numerous background services that would have to collect information from the uploads and from some other sites, parse it, etc. I doubt that there are portlets that fit our specific needs, so we will have to make them ourselves. So, questions:

    1. Deployment (and the difference in cost, if possible): is deploying portals much easier than web apps (Java or .NET)?
    2. Server load: does a portal consume much of the server's power (and can you strip a portal of the things that you do not use)?
    3. Implementation and development of portlets: can you make all the things that you could have done in Java or .NET?
    4. General thoughts on when to use portals and when to use a classic web app.

    Thanks all in advance...

    Read the article

  • Oracle Fusion Applications: Changing the Game

    - by kellsey.ruppel(at)oracle.com
    Originally posted in the Oracle Profit Magazine, November 2010 Edition.

    When the order processing system red-flags a customer's credit status, the IT department doesn't get the customer's call. When a supplier misses a delivery date for a key automotive assembly, it's not the CIO who has to answer for the error. Knowledge workers (known in IT circles as "users") are on the front lines when an exception occurs in an established business process. They're also the ones who study sales trends to decide when to open a new store in an up-and-coming neighborhood, which products are most profitable, how employee skill sets are evolving, and which suppliers are most efficient. In short, knowledge workers are masters of business as unusual. Traditional enterprise resource planning (ERP) systems and other familiar enterprise applications excel at automating, managing, and executing standard business processes. These programs shine when everything goes as planned. Life gets even trickier when a traditional application needs to be extended with a new service or an extra step is added to a business process when new products are brought to market, divisions are merged, or companies are acquired. Monolithic applications often need the IT department to step in and make the necessary adjustments--incurring additional costs and delays. Until now. When Oracle unveiled the much-anticipated family of Oracle Fusion Applications at Oracle OpenWorld in September 2010, knowledge workers in particular had a lot to cheer about. Business users will soon have ready access to analytical information and collaboration tools in the context of what they are working on, so they can make better decisions when problems or opportunities arise. Additionally, the Oracle Fusion Applications platform will make it easy for business users to tweak processes, create new capabilities, and find information, often without the need for IT department assistance and while still following company guidelines. And IT leaders will be happy to hear about new deployment options, guided implementation and setup tools, and cost-saving management capabilities. Just as important, the underlying technologies in Oracle Fusion Applications will allow organizations to choose among their existing investments and next-generation enterprise applications so they can introduce innovations at a pace that makes the most business and financial sense. "Oracle Fusion Applications are architected so you don't have to do rip and replace," says Jim Hayes, managing director of the consulting firm Accenture. "That's very important for creating a business case that will get through the steering committee and be approved by the board. It shows you can drive value and make a difference in the near term." For these and other reasons, analysts and early adopters are calling Oracle Fusion Applications a game changer for enterprise customers. The differences become apparent in three key areas: the way we innovate, work, and adopt technology.

    Game Changer #1: New Standard for Innovation

    Change is a constant challenge for most businesses, whether the catalysts are market dynamics, new competition, or the ever-expanding regulatory environment. And, in an ongoing effort to differentiate, business leaders are constantly looking for new ways to do business, serve constituents, and bring new products and services to market. In addition, companies face significant costs to keep their applications up-to-date.
    For example, when a company adds new suppliers to a procurement system, the IT shop typically has to invest time, effort, and even consulting fees for custom integrations that allow various ERP systems to communicate with each other. Oracle Fusion Applications were built on Web services and a modular SOA foundation to ease customizations and integration activities among all applications--whether from Oracle or another vendor. Interfaces and updates written in ubiquitous Java, rather than a proprietary coding language, allow organizations to tap into existing in-house technical skills rather than seek expensive outside specialists. And with SOA, organizations can extend a feature set or integrate with other SOA environments by combining Web services such as "look up customer" into a new business process managed by the BPEL orchestration engine. Flexibility like this has long-term implications. "Because users capture these changes at a higher metadata layer, not in the application's code, changes and additions are protected even as new versions of Oracle Fusion Applications are released," says Steve Miranda, senior vice president of applications development at Oracle. "This is a much more sustainable approach because you don't incur costly customizations that prevent upgrades and other innovations." And changes are easier to make: if one change is made in the metadata, that change is automatically reflected throughout the application interface, business intelligence, business process, and business logic.

    Game Changer #2: New Standard for Work

    Boosting productivity comes down to doing the basics right: running business processes more efficiently and managing exceptions more effectively, so users can accomplish more in the course of a day or spend more quality time with the most profitable customers. The fastest way to improve process efficiency is to reduce the number of steps it takes to execute common tasks, such as ordering office equipment from an internal procurement system. Oracle Fusion Applications will deliver a complete role-based user experience with business intelligence and collaboration capabilities provided in the context of the work at hand. "We created every Oracle Fusion Applications screen by asking 'What does the user need to know?' 'What does he or she need to do?' and 'Who do they need to work with to get the job done?'" Miranda explains. So when the sales department heads need new laptops, the self-service procurement screen will not only display a list of approved vendors and configurations, but also a running list of reviews by coworkers who recently purchased the various models. Embedded intelligence may also display prevailing delivery lead times based on actual order histories, not the generic shipping dates vendors may quote. The pervasive business intelligence serves many other business activities across all areas of the enterprise. For example, a manager considering whether to promote a direct report can see the person's employee profile, with a salary history, appraisal summaries, and a rundown of skills and training. This approach to business intelligence also has implications for supply chain management. "One of the challenges at Ingersoll Rand is lack of visibility in our supply chain," says Mike Macrie, global director of enterprise applications for global industrial firm Ingersoll Rand.
"Oracle Fusion Applications are going to provide the embedded intelligence to give us that visibility and give us the ability to analyze those orders at any point in our supply chain." Oracle Fusion Applications will also create a "role-based user experience" that displays a work list of events that need attention, based on user job function. Role awareness guides users with daily lists of action items and exceptions. So a credit manager may see seven invoices with discounts that are about to expire or 12 suppliers that have been put on hold because credit memos are awaiting approval. Individualization extends to the search capabilities of Oracle Fusion Applications. The platform uses Web-style search screens powered by an Oracle enterprise search engine, with a security framework that filters search results so individuals will only see the internal information they're authorized to access. A further aid to productivity is Oracle Fusion Applications' integration with Web 2.0 collaboration and social networking resources for business environments. Hover-over text will reveal relevant contact information whenever the name of a person appears in an Oracle Fusion Application. Users can connect via an online chat, phone call, or instant message without leaving the main application, reducing the time required for an accounts payable staffer to resolve a mismatch between an invoiced charge and the service record, for example. Addresses of suppliers, customers, or partners will also initiate hover-over text to show contact details and Web-based maps. Finally, Oracle Fusion Applications will promote a new way of working with purpose-driven communities that can bring new efficiencies to everything from cultivating sales leads to managing new projects. As soon as a lead or project materializes, the applications will automatically gather relevant participants into an online community that shares member contact information, schedules, discussion forums, and Wiki pages. "Oracle Fusion Applications will allow us to take it to the next level with embedded Web 2.0 tools and the embedded analytics," says Steve Printz, CIO and vice president, supply chain management, at window-and-door manufacturer Pella. "[This] allows those employees today who are processing transactions to really contribute to the success of the company and become decision-makers." Game Changer #3: New Standard for Technology AdoptionAs IT becomes a dominant component of how businesses run and compete, organizations need to lower the cost of implementing applications and introducing new application features. In the past, rolling out new code often required creating a test bed system, moving beta code to a separate system for user feedback, and--once all the revisions were made--moving version one of the software onto production systems, where business users could finally get the needed new features. Oracle Fusion Applications will use a dedicated setup manager application to streamline this process. First, the setup manager will help scope out the project, querying users about their requirements. "From those questions and answers we determine the steps and the order of those steps that will enable that task," Miranda says. Next, system utilities will assign tasks to owners, track completion status, and monitor the overall status of a programming effort. Oracle Fusion Applications can then recommend Web services that allow users to migrate setup choices and steps across all the various deployments of the application. 
    Those setup capabilities automate the migration from test systems to production systems, as well as between different business units that may be using the same application. "The self-service ability of the setup manager helps business users change setups with very little intervention from the IT team," says Ravi Kumar, vice president at IT services company Infosys. "That to me is a big difference from how we've viewed enterprise applications before." For additional flexibility, organizations will be able to adopt Oracle Fusion Applications modules in either of two modes: a single-instance alternative uses one database for all Oracle Fusion Applications, while a "pillar mode" creates separate databases to underpin each application. This means IT departments running any one of Oracle's applications or even third-party applications can plug Oracle Fusion Applications modules into their environment and see additional business value created on top of their existing systems. And Oracle Fusion Applications offer a hybrid approach to deployment. The applications are all software-as-a-service-ready, so customers can choose on-premises, public or private cloud, or a combination of these to suit their business needs. It's that combination of flexibility and a roadmap for the future that may be the biggest game changer of all. "The Oracle Fusion Applications architecture allows us to migrate our company at a pace that's consistent with our business strategy, whereas before we might have had to do it with a massive upgrade," says Macrie of Ingersoll Rand. "We're looking forward to that architecture to really give us more flexibility in how we migrate over time."

    For More Information:
    - User Input Key to the Success of Oracle Fusion Applications
    - Transforming Coexistence into Strategic Value
    - Under the Hood
    - Oracle Fusion Applications
    - Oracle Service-Oriented Architecture

    Read the article

  • Apache2 on Raspbian: Multiviews is enabled but not working

    - by Christian L
    I recently moved webservers, from an Ubuntu server set up by my brother (I have sudo) to a Raspbian server set up by myself. On the other server MultiViews worked out of the box, but on Raspbian it does not seem to work, although it seems to be enabled out of the box there as well. What I am trying to do is to get it to find my.doma.in/mobile.php when I enter my.doma.in/mobile in the address field. I am using the same sites-available file as I did before; the file looks like this:

        <VirtualHost *:80>
            ServerName my.doma.in
            ServerAdmin [email protected]
            DocumentRoot /home/christian/www/do
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /home/christian/www/do>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    From what I have read in various places while googling this issue, I found that the negotiation module had to be enabled, so I tried to enable it:

        sudo a2enmod negotiation

    giving me this result:

        Module negotiation already enabled

    I have read through /etc/apache2/apache2.conf and did not find anything in particular that seemed to help me there, but please do ask if you think I should post it. Any ideas on how to solve this by getting MultiViews to work?

    Read the article

  • Do game-theoretic considerations stand in the way of this market-based game-mechanic achieving its goals?

    - by BerndBrot
    Mechanic

    The mechanic is called "market manipulation" and is supposed to work like this:

    - Players can enter the London Stock Exchange (LSE).
    - The LSE displays the stock prices of 8 to 10 companies and derivatives. This number is relatively small to ensure that players will collide in their efforts to manipulate the market in their favor.
    - The prices are calculated based on:
      - the real-world prices of these companies and derivatives (in real time)
      - any market manipulations that were conducted by the players
      - any market corrections of the system
    - Players can buy and sell shares with cash, a resource in the game, at the current in-game market value.
    - Players can manipulate the market, i.e. make the price of a share either rise or fall, by some amount, over a certain period of time. Manipulating the market requires spending certain in-game resources and is therefore limited.
    - The system continuously corrects market manipulations by letting the in-game prices converge towards their real-world counterparts at a rate of 2% of the difference between the two per hour (see the sketch after the question below). Because of this market correction mechanism, pushing up prices (and pushing down prices) becomes increasingly difficult the higher (lower) the price already is.

    Goals

    - Players are supposed to collide (and have incentives to collide) in their efforts to manipulate the market in their favor, especially when it comes to manipulation efforts by different groups.
    - Prices should not settle around any equilibrium points. The more variance the better.
    - Band-wagoning should always involve risk (recognizing that prices have started rising should not be a sure sign that they will keep rising, so that everybody can make easy profits even when they don't manipulate the market themselves).

    Question

    Are there any game-theoretic considerations that prevent the mechanic from achieving these goals?
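    For reference, here is a minimal sketch of the hourly correction step described above (all names are hypothetical; the actual game would define its own price model):

        public class StockPrice
        {
            public decimal InGamePrice { get; set; }
            public decimal RealWorldPrice { get; set; }

            // Each hour the in-game price moves 2% of the way toward the
            // real-world price, so player manipulations decay over time.
            public void ApplyHourlyCorrection()
            {
                const decimal CorrectionRate = 0.02m;
                InGamePrice += (RealWorldPrice - InGamePrice) * CorrectionRate;
            }
        }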

    Read the article

  • Hosting a website on Heroku.... I know how to, but im running into problems!

    - by Thomas Miller
    I'm starting to learn more about the back-end side of programming. Recently I started up Heroku for the second or third time. This time I actually installed the Git update on my Mac and installed Heroku in the terminal. I wanted to upload a static HTML site with the Sinatra gem. Everything worked out fine inside the terminal, though I added Sinatra after I got everything working and the site's files hooked up to Heroku. In my logs I did see that I was missing the Sinatra gem, so I installed it. My site contains both the proper app.rb and config.ru files, yet I have nothing showing up online - just a blank screen! Contacting Heroku about this problem has been very difficult: I get a response every day, and every day I respond with a question to an answer that didn't help me at all. Here are my logs:

        2011-05-18T00:25:20+00:00 app[web.1]: 71.198.0.51 - - [17/May/2011 17:25:20] "GET /favicon.ico HTTP/1.1" 404 18 0.0008
        2011-05-18T00:25:20+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=2ms bytes=313
        2011-05-18T00:25:26+00:00 app[web.1]: 71.198.0.51 - - [17/May/2011 17:25:26] "GET /favicon.ico HTTP/1.1" 404 18 0.0008
        2011-05-18T00:25:26+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=5ms bytes=313
        2011-05-17T18:25:51-07:00 heroku[web.1]: Idling
        2011-05-17T18:26:01-07:00 heroku[web.1]: State changed from up to down
        2011-05-18T01:26:01+00:00 heroku[web.1]: Stopping process with SIGTERM
        2011-05-18T01:26:01+00:00 app[web.1]: Stopping ...
        2011-05-18T01:26:02+00:00 heroku[web.1]: Process exited
        2011-05-17T20:12:46-07:00 heroku[web.1]: Unidling
        2011-05-17T20:12:47-07:00 heroku[web.1]: State changed from created to starting
        2011-05-18T03:12:48+00:00 heroku[web.1]: Starting process with command: thin -p 40055 -e production -R /home/heroku_rack/heroku.ru start
        2011-05-18T03:12:49+00:00 app[web.1]: Thin web server (v1.2.6 codename Crazy Delicious)
        2011-05-18T03:12:49+00:00 app[web.1]: Maximum connections set to 1024
        2011-05-18T03:12:49+00:00 app[web.1]: Listening on 0.0.0.0:40055, CTRL+C to stop
        2011-05-18T03:12:50+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=9954ms service=6ms bytes=565
        2011-05-18T03:12:50+00:00 app[web.1]: 70.91.206.114 - - [17/May/2011 20:12:50] "GET /style.css HTTP/1.1" 200 - 0.0012
        2011-05-18T03:12:50+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269
        2011-05-17T20:12:50-07:00 heroku[web.1]: State changed from starting to up
        2011-05-18T03:12:51+00:00 app[web.1]: 70.91.206.114 - - [17/May/2011 20:12:51] "GET /favicon.ico HTTP/1.1" 404 18 0.0008
        2011-05-18T03:12:51+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=4ms bytes=313
        2011-05-18T03:13:05+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=0ms service=5ms bytes=565
        2011-05-18T03:13:05+00:00 app[web.1]: 70.91.206.114 - - [17/May/2011 20:13:05] "GET / HTTP/1.1" 200 293 0.0011
        2011-05-18T03:13:05+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=2ms bytes=313
        2011-05-18T03:13:05+00:00 app[web.1]: 70.91.206.114 - - [17/May/2011 20:13:05] "GET /favicon.ico HTTP/1.1" 404 18 0.0007
        2011-05-18T03:57:05+00:00 app[web.1]: 172.18.33.56, 58.96.134.66 - - [17/May/2011 20:57:05] "GET / HTTP/1.1" 200 293 0.0007
        2011-05-18T03:57:05+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=0ms service=4ms bytes=565
        2011-05-18T03:57:05+00:00 app[web.1]: 172.18.33.56, 58.96.134.66 - - [17/May/2011 20:57:05] "GET /style.css HTTP/1.1" 200 - 0.0007
        2011-05-18T03:57:05+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269
        2011-05-18T03:57:08+00:00 app[web.1]: 172.18.33.56, 58.96.134.66 - - [17/May/2011 20:57:08] "GET /favicon.ico HTTP/1.1" 404 18 0.0008
        2011-05-17T21:58:27-07:00 heroku[web.1]: Idling
        2011-05-18T04:58:30+00:00 heroku[web.1]: Stopping process with SIGTERM
        2011-05-18T04:58:30+00:00 app[web.1]: Stopping ...
        2011-05-18T04:58:30+00:00 heroku[web.1]: Process exited
        2011-05-17T21:58:33-07:00 heroku[web.1]: State changed from up to down
        2011-05-17T23:11:58-07:00 heroku[web.1]: Unidling
        2011-05-17T23:11:58-07:00 heroku[web.1]: State changed from created to starting
        2011-05-18T06:12:00+00:00 heroku[web.1]: Starting process with command: thin -p 40091 -e production -R /home/heroku_rack/heroku.ru start
        2011-05-18T06:12:01+00:00 app[web.1]: Thin web server (v1.2.6 codename Crazy Delicious)
        2011-05-18T06:12:01+00:00 app[web.1]: Maximum connections set to 1024
        2011-05-18T06:12:01+00:00 app[web.1]: Listening on 0.0.0.0:40091, CTRL+C to stop
        2011-05-18T06:12:01+00:00 app[web.1]: 183.97.156.226 - - [17/May/2011 23:12:01] "GET / HTTP/1.1" 200 293 0.0017
        2011-05-18T06:12:02+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=3209ms service=5ms bytes=565
        2011-05-18T06:12:03+00:00 app[web.1]: 183.97.156.226 - - [17/May/2011 23:12:03] "GET /style.css HTTP/1.1" 200 - 0.0019
        2011-05-17T23:12:08-07:00 heroku[web.1]: State changed from starting to up
        2011-05-18T00:13:13-07:00 heroku[web.1]: Idling
        2011-05-18T00:13:16-07:00 heroku[web.1]: State changed from up to down
        2011-05-18T07:13:16+00:00 heroku[web.1]: Stopping process with SIGTERM
        2011-05-18T07:13:16+00:00 app[web.1]: Stopping ...
        2011-05-18T07:13:17+00:00 heroku[web.1]: Process exited
        2011-05-18T01:54:21-07:00 heroku[web.1]: Unidling
        2011-05-18T01:54:21-07:00 heroku[web.1]: State changed from created to starting
        2011-05-18T08:54:23+00:00 heroku[web.1]: Starting process with command: thin -p 59491 -e production -R /home/heroku_rack/heroku.ru start
        2011-05-18T08:54:24+00:00 app[web.1]: Thin web server (v1.2.6 codename Crazy Delicious)
        2011-05-18T08:54:24+00:00 app[web.1]: Maximum connections set to 1024
        2011-05-18T08:54:24+00:00 app[web.1]: Listening on 0.0.0.0:59491, CTRL+C to stop
        2011-05-18T01:54:28-07:00 heroku[web.1]: State changed from starting to up
        2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=6943ms service=6ms bytes=565
        2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET / HTTP/1.1" 200 293 0.0018
        2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269
        2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET /style.css HTTP/1.1" 200 - 0.0014
        2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET /favicon.ico HTTP/1.1" 404 18 0.0008
        2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=1ms bytes=313
        2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=4ms bytes=313
        2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET /favicon.ico HTTP/1.1" 404 18 0.0008
        2011-05-18T08:54:28+00:00 app[web.1]: 62.244.82.72 - - [18/May/2011 01:54:28] "GET /favicon.ico HTTP/1.1" 404 18 0.0008
        2011-05-18T08:54:28+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=1ms bytes=313
        2011-05-18T02:55:23-07:00 heroku[web.1]: Idling
        2011-05-18T02:55:33-07:00 heroku[web.1]: State changed from up to down
        2011-05-18T09:55:34+00:00 heroku[web.1]: Stopping process with SIGTERM
        2011-05-18T09:55:34+00:00 app[web.1]: Stopping ...
        2011-05-18T09:55:34+00:00 heroku[web.1]: Process exited
        2011-05-18T07:23:10-07:00 heroku[web.1]: State changed from created to starting
        2011-05-18T14:23:12+00:00 heroku[web.1]: Starting process with command: thin -p 20560 -e production -R /home/heroku_rack/heroku.ru start
        2011-05-18T14:23:13+00:00 app[web.1]: Thin web server (v1.2.6 codename Crazy Delicious)
        2011-05-18T14:23:13+00:00 app[web.1]: Maximum connections set to 1024
        2011-05-18T14:23:13+00:00 app[web.1]: Listening on 0.0.0.0:20560, CTRL+C to stop
        2011-05-18T07:23:13-07:00 heroku[web.1]: State changed from starting to up
        2011-05-18T14:23:14+00:00 app[web.1]: 12.183.19.10 - - [18/May/2011 07:23:14] "GET / HTTP/1.1" 200 293 0.0018
        2011-05-18T14:23:14+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=0ms service=7ms bytes=565
        2011-05-18T14:23:14+00:00 app[web.1]: 12.183.19.10 - - [18/May/2011 07:23:14] "GET /style.css HTTP/1.1" 200 - 0.0015
        2011-05-18T14:23:14+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269
        2011-05-18T14:23:14+00:00 app[web.1]: 12.183.19.10 - - [18/May/2011 07:23:14] "GET /favicon.ico HTTP/1.1" 404 18 0.0009
        2011-05-18T14:23:14+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=2ms bytes=313
        2011-05-18T08:24:03-07:00 heroku[web.1]: Idling
        2011-05-18T08:24:07-07:00 heroku[web.1]: State changed from up to down
        2011-05-18T15:24:07+00:00 heroku[web.1]: Stopping process with SIGTERM
        2011-05-18T15:24:07+00:00 app[web.1]: Stopping ...
        2011-05-18T17:34:27-07:00 heroku[web.1]: Unidling
        2011-05-18T17:34:28-07:00 heroku[web.1]: State changed from created to starting
        2011-05-19T00:34:29+00:00 heroku[web.1]: Starting process with command: thin -p 57621 -e production -R /home/heroku_rack/heroku.ru start
        2011-05-18T17:34:31-07:00 heroku[web.1]: State changed from starting to up
        2011-05-19T00:34:32+00:00 heroku[router]: GET pxlc.heroku.com/ dyno=web.1 queue=0 wait=0ms service=5ms bytes=565
        2011-05-19T00:34:32+00:00 app[web.1]: 97.83.58.74 - - [18/May/2011 17:34:32] "GET / HTTP/1.1" 200 293 0.0016
        2011-05-19T00:34:32+00:00 app[web.1]: 97.83.58.74 - - [18/May/2011 17:34:32] "GET /style.css HTTP/1.1" 200 - 0.0011
        2011-05-19T00:34:32+00:00 heroku[router]: GET pxlc.heroku.com/style.css dyno=web.1 queue=0 wait=0ms service=2ms bytes=269
        2011-05-19T00:34:34+00:00 heroku[router]: GET pxlc.heroku.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=4ms bytes=313
        2011-05-19T00:34:34+00:00 app[web.1]: 97.83.58.74 - - [18/May/2011 17:34:34] "GET /favicon.ico HTTP/1.1" 404 18 0.0007
        2011-05-18T18:35:48-07:00 heroku[web.1]: Idling
        2011-05-18T18:35:51-07:00 heroku[web.1]: State changed from up to down

    Read the article

  • LINQ – TakeWhile and SkipWhile methods

    - by nmarun
    I happened to read about these methods on Vikram's blog and tried testing them. Somehow, when I saw the output, things did not seem to add up. I'm writing this post to show how these methods actually work. Let's take the same example shown in Vikram's blog, and I'll build around it.

        int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };

        foreach (var number in numbers.TakeWhile(n => n < 7))
        {
            Console.WriteLine(number);
        }

    Now, the way I (incorrectly) read the condition in the foreach loop was: 'give me all numbers that satisfy n < 7'. So I was expecting the output to be: 5, 4, 1, 3, 2, 0. But when I ran the application, I saw only: 5, 4, 1, 3. Turns out I was wrong (happens at least once a day). The documentation on the method says it 'returns elements from a sequence as long as a specified condition is true'. To show it in code, my interpretation was:

        foreach (var number in numbers)
        {
            if (number < 7)
            {
                Console.WriteLine(number);
            }
        }

    But the actual behavior is equivalent to stopping at the first element that fails the test:

        foreach (var number in numbers)
        {
            if (number < 7)
            {
                Console.WriteLine(number);
            }
            else
            {
                break;
            }
        }

    So there it is, another situation where one simple word makes a world of difference. The SkipWhile method is defined in a similar way – it 'bypasses elements in a sequence as long as a specified condition is true and then returns the remaining elements', not 'bypasses elements in a sequence where a specified condition is true and then returns the remaining elements'. (Subtle... very, very subtle.) It feels strange saying this, but I hope very few people need this article to understand these methods.
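    To make the SkipWhile behavior concrete, here is a small sketch of my own (not from Vikram's post) using the same array; assuming System and System.Linq are in scope, it prints the remainder of the sequence starting at the first element that fails the condition:

        int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };

        // Skips 5, 4, 1, 3 (each < 7); 9 fails the test, so everything
        // from 9 onward is returned, including the later 2 and 0.
        foreach (var number in numbers.SkipWhile(n => n < 7))
        {
            Console.WriteLine(number); // prints 9, 8, 6, 7, 2, 0
        }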

    Read the article

  • FAQ and Best Tips Regarding Learning Databases?

    - by AdityaGameProgrammer
    For a programmer with no prior exposure to databases, what would be a good database to learn: Oracle vs. SQL Server vs. MySQL vs. PostgreSQL? I have come across a lot of discussion of MySQL and PostgreSQL, and frankly I am confused about which to start with. Are they very different, in the sense that, if one had to switch, would exposure to one be counter-productive to learning the other? Is working with databases heavily platform-dependent? What exactly do people mean by database programming vs. administration? Do people choose databases based on the programming language used for the application being developed? In general, when working with databases, is it implicit that we work with some server? Does the choice of database differ when it comes to game development? If so, by what factors does it differ? What are the best tips that you have found to be useful when learning databases? Edit: Some FAQs I had, and found the same on SO: What should every developer know about databases? Which database if learning from scratch in 2010? For a beginner, is there much difference between MySQL and PostgreSQL? What RDBMS should I learn/use? (MySQL/SQL Server/Oracle etc.) To what extent should a developer learn databases? How are database programmers different from other programmers? What kinds of databases are used in games?

    Read the article

  • How bad is it to use display: none in CSS?

    - by Andy
    I've heard many times that it's bad to use display: none for SEO reasons, as it could be an attempt to push in irrelevant popular keywords. A few questions: Is that still received wisdom? Does it make a difference if you're only hiding a single word, or perhaps a single character? If you should avoid any use of it, what are the preferred techniques for hiding (in situations where you need it to become visible again on certain conditions)? Some references I've found so far: Matt Cutts from 2005 in a comment If you're straight-out using CSS to hide text, don't be surprised if that is called spam. I'm not saying that mouseovers or DHTML text or have-a-logo-but-also-have-text is spam; I answered that last one at a conference when I said "imagine how it would look to a visitor, a competitor, or someone checking out a spam report. If you show your company's name and it's Expo Markers instead of an Expo Markers logo, you should be fine. If the text you decide to show is 'Expo Markers cheap online discount buy online Expo Markers sale ...' then I would be more cautious, because that can look bad." And in another comment on the same article We can flag text that appears to be hidden using CSS at Google. To date we have not algorithmically removed sites for doing that. We try hard to avoid throwing babies out with bathwater. (My emphasis) Eric Enge said in 2008 The legitimate use of this technique is so prevalent that I would rarely expect search engines to penalize a site for using the display: none attribute. It’s just very difficult to implement an algorithm that could truly ferret out whether the particular use of display: none is meant to deceive the search engines or not. Thanks in advance, Andy
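    On the 'preferred techniques' sub-question, one widely used pattern (an editor's sketch, not drawn from the sources quoted above) is to clip the content off-screen rather than use display: none, so screen readers still announce it and it can be re-shown simply by removing the class:

        /* Visually hidden but still available to assistive technology.
           Toggle visibility by removing this class with JavaScript. */
        .visually-hidden {
            position: absolute;
            width: 1px;
            height: 1px;
            margin: -1px;
            padding: 0;
            overflow: hidden;
            clip: rect(0 0 0 0);
            border: 0;
        }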

    Read the article

  • Suggested Web Application Framework and Database for Enterprise, “Big-Data” App?

    - by willOEM
    I have a web application that I have been developing for a small group within my company over the past few years, using Pipeline Pilot (plus jQuery and Python scripting) for web development and back-end computation, and Oracle 10g for my RDBMS. Users upload experimental genomic data, which is parsed into a database and made available for querying, transformation, and reporting. Experimental data sets are large and have many layers of metadata. A given experimental data record might have a foreign key relationship with a table that describes this data point's assay. Assays can cover multiple genes, which can have multiple transcripts, which can have multiple mutations, which can affect multiple signaling pathways, etc. Users need to approach this data from any point in those layers of the metadata. Since all data sets for a given data type can run over a billion rows, this results in some large, dynamic queries that are hard to predict. New data sets are added on a weekly basis (~1GB per set). Experimental data is never updated, but the associated metadata can be updated weekly for a few records and yearly for most others. For every data set insert the system sees, there will be between 10 and 100 selects run against it and associated data. It is okay for updates and inserts to run slowly, so long as queries run quickly and are as up-to-date as possible. The application continues to grow in size and scope and is already starting to run slower than I like. I am worried that we have just about outgrown Pipeline Pilot, and perhaps Oracle (as the sole database). Would a NoSQL database or an OLAP system be appropriate here? What web application frameworks work well with systems like this? I'd like the solution to be something scalable, portable, and supportable X years down the road. Here is the current state of the application: Web Server/Data Processing: Pipeline Pilot on Windows Server + IIS Database: Oracle 10g, ~1TB of data, ~180 tables with several billion-plus row tables Network Storage: Isilon, ~50TB of low-priority raw data

    Read the article

  • BizTalk 2009 - The Scope of the Table Looping Functoid

    - by StuartBrierley
    When mapping in BizTalk you will find there are times when you need to map from flat and dispersed elements in your source schema to a repeated record with child elements in your destination schema.  Below is an example of how you can make use of the Table Looping Functoid to bring together these flat elements and create your repeated group.  Although this example is purposely simple, I have previously encountered this issue on a much more complex scale when mapping the response from a credit scoring agency, where all the applicant details were supplied in separate parts of a very flat schema. Consider the source and destination schemas as follows:   Although the Table Looping Functoid states that the first input must be a scoping element linked from a repeating group, you can actually also make use of a constant value.  In this case I know that the source schema always contains two people, so I set this to two. Then you need to set the number of columns in your table, in this case 2 (name and sex), and link all the required fields from the source schema. Following this you can configure the table. You can then add the Table Extractor functoids and complete the map. If you now validate this map you will see that BizTalk warns you about the scoping link for the Table Looping Functoid, but this can be safely ignored. C:\Code\Developer Folders\Stuart Brierley\Test Mapping\TableLooping.btm: warning btm1071: A first input of the Table-Looping functoid must be a link from a Source Tree Node which acts as the scoping parameter. Testing the map will produce the following output:

    Read the article

  • Is there a way to track data structure dependencies from the database, through the tiers, all the way out to a web page?

    - by Sean Mickey
    When we design applications, we generally end up with the same tiered sets of data structures: A persistent data structure that is described using DDL and implemented as RDBMS tables and columns. A set of domain objects that consist primarily of data structures, usually combined with business-rule level logic, that are implemented in a programming language such as Java. A set of service layer interfaces that directly support use case implementations (which use the domain data structures as parameters), implemented as EJBs or something equivalent in another programming language. UI screens that allow users to Create, Retrieve, Update, and (maybe) Delete all manner of data structures and graphs of data structures, with numerous screens and with multiple UI widgets, all structured to support the same data structures. But if you want to change the data structures in any of these tiers, it always seems extremely difficult to assess the impact(s) the change will have across the application. UML can help, but tracing through diagram after diagram is not a real solution to this problem. The best I have ever seen was a homespun data tracking spreadsheet document that listed all of the data structures and walked the relationships from tier-to-tier. Is there a tool or accepted approach that makes it easy to identify a data structure in any tier and easily obtain a list of all dependent: database table and column data structures domain object data structures service layer interface methods and parameter data structures screen & UI component data structures

    Read the article

  • Ubuntu 12.04.1 Radeon 9550 stuck with 640x480, works in Geexbox

    - by Betty
    I am a completely new user trying to set up Ubuntu on an old desktop with an AGP Radeon 9550 graphics card. I am running Ubuntu from a USB drive with persistence, as the PC currently has no hard drive. I seem to be stuck in 640x480 mode: the desktop itself is larger, but the monitor display is stuck at 640x480, and in Settings > Displays only the 640x480 option is available. What I have found out so far: The proprietary ATI drivers no longer support my card. If 3D isn't an issue (it's not), the open-source driver should be fine; it should be installed by default, so in theory I am using it already. xserver-xconf/pci/*.ids doesn't show any entries for the card's PCI ID. The Additional Drivers tool shows no proprietary drivers. I tried booting the current version of Geexbox from a USB stick and it set the resolution correctly by default, so I know it can be done, but I have no idea how. How can I tell what driver the card is using, and how can I get the higher resolutions back?
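    (Editor's sketch, hedged: the commands below are a common way to check which driver is in use and to add a missing mode by hand. The modeline comes from running cvt 1024 768 60, and the output name VGA-0 is an assumption – substitute whatever xrandr reports for your monitor.)

        lspci -k | grep -A 3 VGA            # shows the kernel driver bound to the card
        grep -i driver /var/log/Xorg.0.log  # shows what the X server actually loaded

        cvt 1024 768 60                     # generate a modeline for the resolution you want
        xrandr --newmode "1024x768_60.00" 63.50 1024 1072 1176 1328 768 771 775 798 -hsync +vsync
        xrandr --addmode VGA-0 "1024x768_60.00"
        xrandr --output VGA-0 --mode "1024x768_60.00"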

    Read the article

  • ASP.NET 4.0- Html Encoded Expressions

    - by Jalpesh P. Vadgama
    We all know the <%= expression %> syntax in ASP.NET; we can print any string onto the page with it, and we use it a lot in ASP.NET MVC. ASP.NET 4.0 adds a new feature, HTML Encoded Expressions, which helps prevent cross-site scripting (XSS) attacks because the output is HTML-encoded. ASP.NET 4.0 introduces the expression syntax <%: expression %>, which automatically HTML-encodes the string. Let's take an example. I have created a protected Hello World method that returns a simple string containing characters that need to be HTML-encoded. Below is the code:

        protected static string HelloWorld()
        {
            return "Hello World!!! returns from function()!!!>>>>>>>>>>>>>>>>>";
        }

    Now let's use that HelloWorld method in our page HTML like below. I am going to use both expressions to show you the exact difference.

        <form id="form1" runat="server">
            <div>
                <strong><%: HelloWorld()%></strong>
            </div>
            <div>
                <strong><%= HelloWorld()%></strong>
            </div>
        </form>

    Now run the application: in the browser both look similar, but when you look at the page source HTML you can clearly see that one is HTML-encoded and the other is not. That's it. It's cool. Stay tuned for more. Happy Programming. Technorati Tags: ASP.NET 4.0,HTMLEncode,C#4.0
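    For illustration, here is roughly what the page source shows for the two expressions (my own sketch of the output the post's screenshot captured; the run of '>' characters is shortened here):

        <!-- <%: HelloWorld() %> : special characters are HTML-encoded -->
        <strong>Hello World!!! returns from function()!!!&gt;&gt;&gt;&gt;&gt;</strong>

        <!-- <%= HelloWorld() %> : the string is written out as-is -->
        <strong>Hello World!!! returns from function()!!!>>>>></strong>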

    Read the article

  • SQL SERVER – Solution – Puzzle – Challenge – Error While Converting Money to Decimal

    - by pinaldave
    Earlier I posted a quick puzzle and received a wonderful response to it. Today we will go over the solution. The puzzle was posted here: SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal. Run the following code in SSMS:

        DECLARE @mymoney MONEY;
        SET @mymoney = 12345.67;
        SELECT CAST(@mymoney AS DECIMAL(5,2)) MoneyInt;
        GO

    The above code gives the following error: Msg 8115, Level 16, State 8, Line 3 Arithmetic overflow error converting money to data type numeric. Why, and what is the solution? The solution is as follows:

        DECLARE @mymoney MONEY;
        SET @mymoney = 12345.67;
        SELECT CAST(@mymoney AS DECIMAL(7,2)) MoneyInt;
        GO

    There were more than 20 valid answers. Here is the reason. The decimal data type is defined as Decimal (Precision, Scale), in other words Decimal (Total digits, Digits after decimal point). Precision includes Scale. So Decimal (5,2) actually means we can have 3 digits before the decimal point and 2 digits after it. To accommodate 12345.67 one needs higher precision. The correct answer would be DECIMAL (7,2), as it can hold all seven digits. Here is the list of the experts who got the correct answer, and I encourage all of you to read their comments over here. Fbncs Piyush Srivastava Dheeraj Abhishek Anil Gurjar Keval Patel Rajan Patel Himanshu Patel Anurodh Srivastava aasim abdullah Paulo R. Pereira Chintak Chhapia Scott Humphrey Alok Chandra Shahi Imran Mohammed SHIVSHANKER The very first answer was provided by Fbncs, and Dheeraj had a very interesting comment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
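    To make the precision/scale point concrete, here is a small illustration of my own (not from the original post): DECIMAL(5,2) tops out at 999.99, so a fourth digit before the decimal point overflows.

        SELECT CAST(999.99 AS DECIMAL(5,2));     -- works: 3 digits + 2 decimal places fit in (5,2)
        -- SELECT CAST(1000.00 AS DECIMAL(5,2)); -- fails with the same arithmetic overflow error
        SELECT CAST(12345.67 AS DECIMAL(7,2));   -- works: 5 digits + 2 decimal places fit in (7,2)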

    Read the article

  • Issuing Current Time Increments in StreamInsight (A Practical Example)

    The issuing of a Current Time Increment (Cti) in StreamInsight is one of the most important concepts to learn if you want your streams to be responsive. A full discussion of how to issue Ctis is beyond the scope of this article, but a very good explanation, in addition to Books Online, can be found in this three-part series by Ciprian Gerea, a member of the StreamInsight team at Microsoft. Time in StreamInsight Series: http://blogs.msdn.com/b/streaminsight/archive/2010/07/23/time-in-streaminsight-i.aspx http://blogs.msdn.com/b/streaminsight/archive/2010/07/30/time-in-streaminsight-ii.aspx http://blogs.msdn.com/b/streaminsight/archive/2010/08/03/time-in-streaminsight-iii.aspx A lot of the problems I see with unresponsive or stuck streams on the MSDN forums come down to how Ctis are enqueued, or in a lot of cases not enqueued. If you enqueue events and never enqueue a Cti, StreamInsight will be perfectly happy; you, on the other hand, will never see data on the output, as you have not told StreamInsight to flush the stream. This article deals with a specific implementation problem I had recently whilst working on a StreamInsight project. I look at some possible options and discuss why they would not work before showing the way I solved the problem. The stream of data I was dealing with on this project was very bursty; that is to say, when events were flowing they came through very quickly and in large numbers (1000 events/sec), but when the stream calmed down it could be a few seconds between each event. When enqueuing events into the StreamInsight engine, it is best practice to do so with a StartTime that is given to you by the system producing the event. StreamInsight processes events, and it doesn't matter whether those events are being pushed into the engine by a source system or read from something like a flat file in a directory somewhere; you can apply the same logic and temporal algebra to both situations. Reading from a file is an excellent example of where the time of the event on the source itself is very important: we could be reading that file a long time after it was written. Being able to read the StartTime from the events allows us to define windows that will hold the correct sets of events. I was able to do this with my stream, but this is where my problems started. Below is a very simple script to create a SQL Server table and populate it with sample data that shows exactly the problem I had.

        CREATE TABLE [dbo].[t]
        (
            [c1] [int] PRIMARY KEY,
            [c2] [datetime] NULL
        )
        INSERT t VALUES (1,'20100810'),(2,'20100810'),(3,'20100810')

    Column c2 defines the StartTime of the event on the source, and as you can see the value in all 3 rows of data is the same. If we read Ciprian's articles we know that we can define how Ctis get injected into the stream in 3 different places: the stream definition, the input factory, and the input adapter. I personally have always been a fan of enqueuing Ctis through the factory.
    Below is code typical of what I would use to do this. On the class itself I do some inheriting:

        public class SimpleInputFactory :
            ITypedInputAdapterFactory<SimpleInputConfig>,
            ITypedDeclareAdvanceTimeProperties<SimpleInputConfig>

    And then I implement the following function:

        public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape)
        {
            return new AdapterAdvanceTimeSettings(
                new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.FromTicks(-1)),
                AdvanceTimePolicy.Adjust);
        }

    The configInfo.CtiFrequency property is a value I pass through to define after how many events I want a Cti to be injected, which in turn will flush the stream of data; I usually pass a value of 1 for this setting. The second parameter determines the Cti timestamp in terms of a delay relative to the events: -1 ticks in the past results in 1 tick in the future, i.e., ahead of the event. The problem with this method, though, is that if consecutive events have the same StartTime then only one of those events will be enqueued. In this example I use the following to assign the StartTime of my events:

        currEvent.StartTime = (DateTimeOffset)dt.c2;

    If I go ahead and run my StreamInsight process with this configuration, I can see on the output adapter that two events have been removed. To see this in a little more depth I can use the StreamInsight Debugger and see what happens internally. What is happening here is that the first event arrives and a Cti is injected with a time of 1 tick after the StartTime of that event (also the EndTime of the event). The second event arrives with a StartTime before the Cti, and even though we specified AdvanceTimePolicy.Adjust on the factory, we know that a point event can never be adjusted like this, so the event is dropped. The same happens for the third event as well (the second and third events get trumped by the Cti). For a more detailed discussion of why this happens look here: http://www.sqlis.com/sqlis/post/AdvanceTimePolicy-and-Point-Event-Streams-In-StreamInsight.aspx We end up with a single event being pushed into the output adapter, and our result now makes sense. The next way I tried to solve this problem was by changing the value of the second parameter to TimeSpan.Zero. Here is how my factory code now looks:

        public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape)
        {
            return new AdapterAdvanceTimeSettings(
                new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.Zero),
                AdvanceTimePolicy.Adjust);
        }

    What I am doing here is declaring a policy that says: inject a Cti together with every event, and stamp it with a StartTime equal to the StartTime of the event itself (TimeSpan.Zero). This method has plus points as well as a downside. The upside is that no events will be lost for having the same StartTime as previous events. The downside is that because the Cti is declared with the StartTime of the event itself, it does not actually flush that particular event, because in the StreamInsight algebra a Cti commits only those events that occurred strictly before it. To flush the events we need a Cti to be enqueued with a greater StartTime than the events themselves. Here is what happened when I ran this configuration: As you can see, all we got through was the Cti and none of the events. The debugger output shows the stamps on the Cti and the events themselves.
    Because the Cti issued has the same timestamp (StartTime) as the events, none of the events get flushed. I was nearly there, but not quite. Because my stream was bursty, it was possible that the next event would not come along for a few seconds, and that was far too long for an event to sit enqueued without being flushed to the output adapter. I needed another solution. Two possible solutions crossed my mind, although only one of them made sense once I explored it some more: Where multiple events have the same StartTime I could add 1 tick to the first event, two to the second, three to the third, etc., thereby giving them unique StartTime values. Add a timer to manually inject Ctis. The problem with the first option is that I would be giving the events a new StartTime. This would cause me the following problems: If I want to define windows over the stream, then some events may not be captured in the right windows, and therefore any calculations I did on those windows would be wrong. What would happen if we had 10,000 events with the same StartTime? I would enqueue them with StartTime + n ticks. Along comes a genuine event with a StartTime of the very first event + 1 tick; it is now too far in the past as far as my stream is concerned and it would be dropped. Not what I would want at all. I decided then to look at the timer-based solution. I created a timer on my input adapter that elapses every 200ms:

        private Timer tmr;

        public SimpleInputAdapter(SimpleInputConfig configInfo)
        {
            ctx = new SimpleTimeExtractDataContext(configInfo.ConnectionString);
            this.configInfo = configInfo;
            tmr = new Timer(200);
            tmr.Elapsed += new ElapsedEventHandler(t_Elapsed);
            tmr.Enabled = true;
        }

        void t_Elapsed(object sender, ElapsedEventArgs e)
        {
            ts = DateTime.Now - dtCtiIssued;
            if (ts.TotalMilliseconds >= 200 && TimerIssuedCti == false)
            {
                EnqueueCtiEvent(System.DateTime.Now.AddTicks(-100));
                TimerIssuedCti = true;
            }
        }

    In the t_Elapsed event handler I find the difference in time between now and when the last event was processed (dtCtiIssued). I then check whether that difference is greater than or equal to 200ms and whether the last Cti was issued by the timer or by a genuine event (TimerIssuedCti). If I didn't do this check I would enqueue a Cti every time the timer elapsed, which is not something I wanted. If the difference between the two times is greater than or equal to 200ms and the last event enqueued came from a real event, I issue a Cti through the timer to flush the event queue; otherwise I do nothing. When I enqueue the Ctis into my stream in my ProduceEvents method I also set the values of dtCtiIssued and TimerIssuedCti:

        currEvent = CreateInsertEvent();
        currEvent.StartTime = (DateTimeOffset)dt.c2;
        TimerIssuedCti = false;
        dtCtiIssued = currEvent.StartTime;

    If I go ahead and run this configuration I see the following in my output. As we can see, the first Cti gets enqueued as before, but then another is enqueued by the timer, and because this has a later timestamp it flushes the enqueued events through the engine. Conclusion Hopefully this has shown how the enqueuing of Ctis can have a dramatic effect on the responsiveness of your output in StreamInsight. Understanding the temporal nature of the product is, for me, one of the most important things you can learn. I have attached my solution for the demos. It is all in one project, and testing each variation is a simple matter of commenting and un-commenting the parts of the code we have been dealing with here.

    Read the article

  • They Wrote The Book On It

    - by steve.diamond
    First of all, an apology to you all for not posting this yesterday, when I should have. For those of you bloggers out there, you know the difference between "Save" and "Preview." But I temporarily forgot it. Nevertheless, while I'm not impressed with this mishap, I'm blown away by the initiative three of my colleagues have taken. Jeff Saenger, Tim Koehler, and Louis Peters recently wrote a book, "Oracle CRM On Demand Deployment Guide." Not only that, they got this book PUBLISHED. These guys know their stuff. They have worked in the CRM industry for many years. And trust me, they command a lot of respect inside this organization. In the words of Louis Peters (who posted this verbiage yesterday on LinkedIn), "We've assembled all the best practices and lessons learned over the past six years working with CRM On Demand. The book covers a range of topics - working with SaaS-based applications, planning and executing a successful rollout, designing elegant and high-performing applications, and working effectively with Oracle. We even included several sample designs based on successful real-world deployments. Our main target audience is the CRM On Demand project team - sponsors, project managers, administrators, developers - really anyone planning, implementing or maintaining the application." Now these guys don't know it, but I'll be interviewing one of them and including audio excerpts of that conversation right here next Wednesday. In the meantime, if you want to learn more about successful CRM deployments in general, and working with Oracle CRM On Demand in particular, you should check out this book.

    Read the article

  • Unity3D and Texture2D. GetPixel returns wrong values

    - by Heisenbug
    I'm trying to use Texture2D to set and get colors, but I encountered some strange behavior. Here's code that reproduces it:

        Texture2D tex = new Texture2D(2, 2, TextureFormat.RGBA32, false);
        Color col = new Color(1.0f, 0.5f, 1.0f, 0.5f); // col values: 1.00, 0.500, 1.00, 0.500
        tex.SetPixel(0, 0, col);
        Color colDebug = tex.GetPixel(0, 0);           // col values: 1.00, 0.502, 1.00, 0.502

    The Color retrieved with GetPixel is different from the Color set before. I initially thought it was float approximation, but when inspecting col the stored values are correct, so that can't be the reason. A sampling error also seems unlikely, because GetPixel returns a value so close to the original that it doesn't appear to be interpolated with anything else. Anyway, I tried adding these lines after building the texture, but nothing changed:

        this.tex.filterMode = FilterMode.Point;
        this.tex.wrapMode = TextureWrapMode.Clamp;
        this.tex.anisoLevel = 1;

    What's my mistake? What am I missing? In addition to that: I'm using tex to store Rect coordinates returned from atlas generation, in order to be able to retrieve the correct UV coordinates of an atlas inside a shader. Is this the right way to go?
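    A likely explanation (editor's note, hedged, since the question is unanswered here): TextureFormat.RGBA32 stores each channel as an 8-bit byte, so 0.5f is quantized to 128 on write and comes back as 128/255 ≈ 0.502 on read, which matches the value observed above. A quick check of the arithmetic:

        // 0.5f cannot be represented exactly in 8 bits per channel:
        byte stored = (byte)Mathf.Round(0.5f * 255f); // 127.5 rounds to 128
        float roundTripped = stored / 255f;           // 0.50196..., displayed as 0.502
        float error = roundTripped - 0.5f;            // the ~0.002 discrepancy seen in GetPixel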

    Read the article

  • 550 “Overwrite permission denied” when editing a file via FTP

    - by nodebunny
    DreamHost recently moved my accounts to a new shared box, and now I can't edit files via UltraEdit's built-in FTP client, which messes up my workflow! What did they change that broke this? It stopped working after they moved me. Here's the output from the FTP console in UltraEdit:

        10/26/2011 10:42:36 AM: 220 DreamHost FTP Server
        10/26/2011 10:42:36 AM: USER nodebunny
        10/26/2011 10:42:36 AM: 331 Password required for ninjawww
        10/26/2011 10:42:36 AM: PASS xxxxxxxx
        10/26/2011 10:42:36 AM: 230 User nodebunny logged in
        10/26/2011 10:42:36 AM: FEAT
        10/26/2011 10:42:36 AM: 211-Features: LANG ja-JP.UTF-8;ja-JP;zh-TW;fr-FR;zh-CN;en-US*;bg-BG;ko-KR.UTF-8;ko-KR MDTM MFMT TVFS UTF8 MFF modify;UNIX.group;UNIX.mode; MLST modify*;perm*;size*;type*;unique*;UNIX.group*;UNIX.mode*;UNIX.owner*; REST STREAM SIZE 211 End
        10/26/2011 10:42:36 AM: OPTS UTF8 ON
        10/26/2011 10:42:36 AM: 200 UTF8 set to on
        10/26/2011 10:42:36 AM: PWD
        10/26/2011 10:42:36 AM: 257 "/" is the current directory
        10/26/2011 10:42:36 AM: PWD
        10/26/2011 10:42:36 AM: 257 "/" is the current directory
        10/26/2011 10:42:36 AM: CWD /dev/proj/nodebunny
        10/26/2011 10:42:36 AM: 250 CWD command successful
        10/26/2011 10:42:36 AM: PWD
        10/26/2011 10:42:36 AM: 257 "/dev/proj/nodebunny/lib/Buffer" is the current directory
        10/26/2011 10:42:36 AM: PWD
        10/26/2011 10:42:37 AM: 257 "/dev/proj/nodebunny/lib/Buffer" is the current directory
        10/26/2011 10:42:37 AM: TYPE I
        10/26/2011 10:42:37 AM: 200 Type set to I
        10/26/2011 10:42:37 AM: PORT 10,15,55,125,226,16
        10/26/2011 10:42:37 AM: 200 PORT command successful
        10/26/2011 10:42:37 AM: STOR Buffer.pm
        10/26/2011 10:42:37 AM: 550 Buffer.pm: Overwrite permission denied

    Read the article

  • Does a site's bounce rate influence Google rankings?

    - by Joel Spolsky
    Does Google consider bounce rate or something similar in ranking sites? Background: here at Stack Exchange we noticed that the latest Google algorithm changes resulted in about a 20% dip in traffic to Server Fault (and a much smaller dip in traffic to Super User). Stack Overflow traffic was not affected. There was an article on WebProNews which hypothesized that bounce rate might be a ranking signal in Google's latest Panda update. According to Google Analytics, these are our bounce rates over the last month: Site Bounce Rate Avg Time on Site ------------- ----------- ---------------- SuperUser 84.67% 01:16 ServerFault 83.76% 00:53 Stack Overflow 63.63% 04:12 Now, technically, Google has no way to know the bounce rate. If you go to Google, search for something, and click on the first result, Google can't tell the difference between: a user who turns off their computer a user who goes to a completely different web site a user who spends hours clicking around on the website they landed on What Google does know is how long it takes the user to come back to Google and do another search. According to the book In The Plex (page 47), Google distinguishes between what they call "short clicks" and "long clicks": A short click is a search where the user quickly comes back to Google and does another search. Google interprets this as a signal that the first search results were unsatisfactory. A long click is a search where the user doesn't search again for a long time. The book says that Google uses this information internally, to judge the quality of their own algorithms. It also said that short click data in which someone retypes a slight variation of the search is used to fuel the "Did you mean...?" spell checking algorithm. So, my hypothesis is that Google has recently decided to use long click rates as a signal of a high quality site. Does anyone have any evidence of this? Have you seen any high-bounce-rate sites which lost traffic (or vice-versa)?

    Read the article
