Search Results

Search found 69 results on 3 pages for 'j rightly'.


  • Session timeout issue

    - by Kumar
    I have a role-based ASP.NET C# web application in which I put the menu object into the session, and I have a session timeout configured in web.config as below: <forms defaultUrl="Home.aspx" loginUrl="Login.aspx" name=".ASPXFORMSAUTH" timeout="10"></forms> I first logged into the system as an employee and waited until the session expired; when I then clicked a link in the menu I was rightly redirected to the login page with the ReturnUrl parameter. But when I next log into the system as an administrator, I still see the employee menu and not the admin menu. The method which loads the menu first checks whether the menu session object is null: if it is not, it loads the menu from the session; if it is, it builds the menu and puts it into the session. So when the system times out, the menu session object is not being cleared. How can I fix this?
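
    One possible fix, sketched below with a hypothetical "Menu" session key and assuming a Membership-backed login: clear the cached menu whenever a user authenticates, so a new login never inherits the previous role's menu.

        // Login.aspx.cs - a minimal sketch with hypothetical names, not the poster's actual code.
        // Membership and FormsAuthentication live in System.Web.Security.
        protected void LoginButton_Click(object sender, EventArgs e)
        {
            if (Membership.ValidateUser(UserName.Text, Password.Text))
            {
                // Drop the menu cached by the previous (possibly different-role) login.
                Session.Remove("Menu");
                FormsAuthentication.RedirectFromLoginPage(UserName.Text, false);
            }
        }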

    Read the article

  • iPhone SDK Internet connection detection

    - by givp
    I'm working on an iPhone application that makes a few calls to web services. I posted this application to the Apple store but it got rejected (and rightly so), since there was no error message displayed to the user if no Internet connection is available, and obviously the application would not work without one. So I just wanted to know how best to achieve this? I'm guessing something needs to go in the viewDidLoad method that will show an alert box saying something like "You need an Internet connection to use this application". Any ideas would be appreciated.

    Read the article

  • When is JavaScript's eval() not evil?

    - by Richard Turner
    I'm writing some JavaScript to parse user-entered functions (for spreadsheet-like functionality). Having parsed the formula, I could convert it into JavaScript and run eval() on it to yield the result. However, I've always shied away from using eval() if I can avoid it, because it's evil (and, rightly or wrongly, I've always thought it even more evil in JavaScript, because the code to be evaluated might be changed by the user). Obviously one has to use eval() to parse JSON (I presume that JS libraries use eval() for this somewhere, even if they run the JSON through a regex check first), but when else, other than when manipulating JSON, is it OK to use eval()?

    Read the article

  • Asp.Net MVC2 TekPub Starter Site methodology question

    - by Pino
    Ok I've just run into this and I was only supposed to be checking my emails, but I've ended up watching this (and I'm not far off subscribing to TekPub). http://tekpub.com/production/starter Now this app is a great starting point, but it raises one issue for me and the development process I've been shown to follow (rightly or wrongly). There is no conversion from the LinqToSql object when passing data to the view. Are there any negatives to this? The main one I can see is with validation: does this cause issues when using MVC's built-in validation, as this is something we use extensively? Because we are using the objects generated by LinqToSql, how would one go about adding validation, like [Required(ErrorMessage="Name is Required")] public string Name {get;set;} I'm interested to understand the benefits of this methodology, and any negatives that, should we take it on, we would experience through the development process.
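
    Since LINQ to SQL generates its classes as partial, one common way to attach MVC2's DataAnnotations validation without touching the generated file is the "buddy class" pattern, sketched here with a hypothetical Product entity:

        // A sketch, not code from the starter site; entity and property names are hypothetical.
        using System.ComponentModel.DataAnnotations;

        [MetadataType(typeof(ProductMetadata))]
        public partial class Product { }   // pairs with the LinqToSql-generated partial class

        public class ProductMetadata
        {
            [Required(ErrorMessage = "Name is Required")]
            public string Name { get; set; }
        }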

    Read the article

  • Can I make controls defined in my markup public instead of protected?

    - by RoboShop
    Say I have a web site with a master page and an aspx page. In my ASPX page, I point to my master page with the MasterType tag. <%@ MasterType VirtualPath="~/mymasterpage.master" %> Say I've defined a label in the markup of my master page. If you look at the designer code, this label will be something like this: protected global::System.Web.UI.WebControls.Label label1; Now in my content page, I would like to reference this label. If I type in "Master.label1", the compiler will complain that the control is inaccessible due to its protection level, and rightly so, as label1 is automatically declared as "protected". My question is: if I define controls in my markup page, is it possible to make these controls public instead of protected? I do not see an attribute for it. Thanks in advance.
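
    One conventional workaround, assuming nothing about the poster's project beyond the designer snippet above: leave the designer field protected and expose it through a public property in the master page's code-behind (the class and property names here are hypothetical).

        // mymasterpage.master.cs - a sketch
        using System.Web.UI.WebControls;

        public partial class MyMasterPage : System.Web.UI.MasterPage
        {
            public Label PageLabel
            {
                get { return label1; }   // wraps the protected designer field
            }
        }

        // In the content page, with the MasterType directive in place:
        // Master.PageLabel.Text = "set from the content page";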

    Read the article

  • ASP.NET Web User Control with Javascript used multiple times on a page - How to make javascript func

    - by Jason Summers
    I think I summed up the question in the title. Here is some further elaboration... I have a web user control that is used in multiple places, sometimes more than once on a given page. The web user control has a specific set of JavaScript functions (mostly jQuery code) that are contained within *.js files and automatically inserted into page headers. However, when I use the control more than once on a page, the *.js files are included 'n' times and, rightly so, the browser gets confused about which control each function should be executing against. What do I need to do to resolve this problem? I've been staring at this all day and I'm at a loss. All comments greatly appreciated. Jason
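
    A sketch of one common remedy (the script name and path are hypothetical): register the include through ClientScriptManager, which keys registrations by type and name, so the file is emitted once no matter how many instances of the control sit on the page.

        // In the user control's code-behind - a sketch, not the poster's code.
        protected void Page_Load(object sender, EventArgs e)
        {
            Page.ClientScript.RegisterClientScriptInclude(
                this.GetType(), "MyControlScript",       // same key for every instance
                ResolveUrl("~/Scripts/MyControl.js"));   // hypothetical path
        }

    Each instance can then pass its own this.ClientID into the shared functions, so the jQuery selectors act on that instance's elements only.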

    Read the article

  • SSDT - What's in a name?

    - by jamiet
    SQL Server Data Tools (SSDT) recently got released as part of SQL Server 2012 and, depending on who you believe, it can be described as either: a suite of tools for building SQL Server database solutions, or a suite of tools for building SQL Server database, Integration Services, Analysis Services & Reporting Services solutions. Certainly the SQL Server 2012 installer seems to think it is the latter, because it describes SQL Server Data Tools as "the SQL server development environment, including the tool formerly named Business Intelligence Development Studio. Also installs the business intelligence tools and references to the web installers for database development tools", as you can see here:

    Strange then that, seemingly, there is no consensus within Microsoft about what SSDT actually is. On yesterday's blog post First Release of SSDT Power Tools, reader Simon Lampen asked the quite legitimate question: "I understand (rightly or wrongly) that SSDT is the replacement for BIDS for SQL 2012 and have just installed this. If this is the case can you please point me to how I can edit rdl and rdlc files from within Visual Studio 2010 and import MS Access reports." To which came the following reply: "SSDT doesn't include any BIDs (sic) components. Following up with the appropriate team (Analysis Services, Reporting Services, Integration Services) via their forum or msdn page would be the best way to answer you questions about these kinds of services." That's from a Microsoft employee, by the way. Simon is even more confused by this and replies with: "I have done some more digging and am more confused than ever. This documentation (and many others): msdn.microsoft.com/.../ms156280.aspx expressly states that SSDT is where report editing tools are to be found." And on it goes...

    You can see where Simon's confusion stems from. He has official documentation stating that SSDT includes all the stuff for building SSIS/SSAS/SSRS solutions (this is confirmed in the installer, remember), yet someone from Microsoft tells him "SSDT doesn't include any BIDs components". I have been close to this for a long time (all the way through the CTPs), so I can kind of understand where the confusion stems from. To my understanding, SSDT was originally the name of the database dev stuff, but eventually that got expanded to include all of the dev tools - I guess not everyone in Microsoft got the memo.

    Does this sound familiar? Have we not been down this road before? The database dev tools have had umpteen names over the years (do any of datadude, TSData, VSTS for DB Pros, DBPro, or VS2010 Database Projects sound familiar?), and I was hoping that the SSDT moniker would put all confusion to bed - evidently it's as complicated now as it has ever been.

    Forgive me for whinging, but putting meaningful, descriptive, accurate, well-defined and easily-communicated names onto a product doesn't seem like a difficult thing to do. I guess I'm mistaken!

    Onwards and upwards...@Jamiet

    Read the article

  • Engagement: Don’t Forget Your Employees!

    - by Kellsey Ruppel
    By Mark Brown, Sr. Director, Oracle WebCenter. This week we want to focus on Employee Engagement, and how it is critical to your business. Today we hear and read a great deal about "Customer Engagement", and rightly so; it is those customers, whether they be traditional paying customers, citizens, students, club members, or whoever else, that are "paying the bills". A more engaged customer is more likely to make it easier to pay those bills by buying more, giving good reviews, or spreading the word of how wonderful their experience was. But what about those who provide those services, those who design and make those goods; why is it that all too often they are left out of conversations concerning engagement? In fact, it is critical that we consider our employees as customers, since they use the internal systems that run the organization the same way customers use external systems. Studies have shown that organizations in which employees feel "engaged", better able to make decisions, do their jobs, and connect with their peers, deliver better returns to their stakeholders (shareholders). On the surface this seems obvious: happy employees are more productive employees. But it leads to the question: how many of our existing policies, systems and processes are actually reducing that level of engagement? Let's look at a couple of examples. If posting new information that may be of great value to everyone in the larger organization is hard to do because we use an antiquated system, then we're making it hard to share and increasing the potential for duplicate work. If it is not trivially obvious how to create and publish this post, then chances are very high that I'll put it at the bottom of my queue. And finally, when critical information is spread across various systems, intranet sites, workgroups and people's inboxes, it is very hard to learn and grow from that information. These may sound trivial, but how often do we put things off not because they are intellectually challenging (we may have the answer at our fingertips), but because it is hard to make that information readily available? If an engaged employee is a productive employee, then what can we do to increase their level of engagement? We can start by looking for opportunities to provide self-documenting, self-service solutions. Our newer employees grew up using simplified web interfaces every day, and they loathe calling a help desk unless it is the last resort. Sadly, many of our enterprise applications have not kept pace, and we all still have processes that are based on sending an email -- like discount approvals, vacation requests, or even offer-letter approvals. My suggestion is to pick one highly visible, high-impact process where employees are either reticent to execute the process or openly complain about how cumbersome it is, and look at the mechanism for that process. If there are better ways, streamlined steps, or better UIs that could be used, then you have a candidate for reconfiguring that process and making it more engaging. Looking to better engage your employees? Start here!

    Read the article

  • IDC Analyst Report Touts Oracle–Accenture Strategic Initiative

    - by kristin.jellison
    Hi there, partners! Oracle Engineered Systems have been getting some love lately, and we want to share it with you! The market intelligence and advisory firm IDC recently released a report lauding Oracle and Accenture's strategic initiative to bring the performance and flexibility of Oracle Engineered Systems to clients. The report, "Oracle and Accenture Strategic Alliance Places Big Bet on Engineered Systems," by Steve White, offers a largely positive analysis of the relationship. White notes that the alliance is "one of the largest in the industry." Under the relationship, Accenture has incorporated Oracle Engineered Systems - including Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle SuperCluster, and Oracle Exalytics In-Memory Machine - into its leading datacenter transformation consulting services. Together, the two companies have also created bespoke platforms, such as the Accenture Foundation Platform for Oracle, which helps clients accelerate deployments on Oracle Fusion Middleware, running Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine. Oracle Engineered Systems deliver a single, engineered platform spanning server, storage, and networking. This makes it easier and cheaper for Accenture clients around the world to prepare their datacenters for managing, processing and analyzing the massive amounts of data they (rightly) anticipate seeing in the next decade. The new solutions can help reduce the effort and cost of migrating any vendor database to an Oracle Engineered Systems platform, which can lower the cost of ownership by up to 50 percent. For its part, Accenture has built a team of 300 consultants to implement the solutions and increase the flexibility and stability of client datacenters. This move further expands one of the fastest-growing full-service Oracle Enterprise solutions. Over 52,000 Accenture consultants are qualified to implement, upgrade and outsource the Oracle product suite. Accenture is a Diamond-level member of Oracle PartnerNetwork (OPN). For Oracle Partners, this update should give you at least two things to walk away with. First, this initiative is showing signs of success. As Marty Cole, group chief executive for Accenture's Technology growth platform, put it, "We are seeing an increasing number of clients recognizing the value of consolidating their databases and taking advantage of the cost and performance benefits delivered by these solutions." The pipeline is there - and not just for Accenture. Use this example to show your clients that investments in Oracle Engineered Systems are on the rise. Second, recognize that Oracle Engineered Systems represent one of the biggest platforms for growth that Oracle has to offer partners. As part of the agreement, Accenture is able to provide: Platform Readiness Assessments, Platform Implementation, App Rationalization, Database Rationalization, and Managed Services. These are all enablement opportunities you can offer customers under Oracle's partner programs, to continue building the value of their investments and the value of your relationship with Oracle. Take a read through the IDC report. To learn more about the partnership, see this press release. Happy selling! The OPN Communications Team

    Read the article

  • Are SQL Injection vulnerabilities in a PHP application acceptable if mod_security is enabled?

    - by Austin Smith
    I've been asked to audit a PHP application. No framework, no router, no model. Pure PHP. Few shared functions. HTML, CSS, and JS all mixed together. I've discovered numerous places where SQL injection would be easily possible. There are other problems with the application (XSS vulnerabilities, rampant inline CSS, code copy-pasted everywhere) but this is the biggest. Sometimes they escape inputs, not using a prepared query or even mysql_real_escape_string(), mind you, but using addslashes(). Often, though, their queries look exactly like this (pasted from their code but with columns and variable names changed): $user = mysql_query("select * from profile where profile_id='".$_REQUEST["profile_id"]."'"); The developers in question claimed that they were unable to hack their application. I tried, and found mod_security to be enabled, resulting in HTTP 406 for some obvious SQL injection attacks. I believe there to be sophisticated workarounds for mod_security, but I don't have time to chase them down. They claim that this is a "conceptual" matter and not a "practical" one since the application can't easily be hacked. Their internal auditor agreed that there were problems, but emphasized the conceptual nature of the issues. They also use this conceptual/practical argument to defend against inline CSS and JS, absence of code organization, XSS vulnerabilities, and massive amounts of repetition. My client (rightly so, perhaps) just wants this to go away so they can launch their product. The site works. You can log in, do what you need to do, and things are visibly functional, if slow. SQL Injection would indeed be hard to do, given mod_security. Further, their talk of "conceptual vs. practical" is rhetorically brilliant, considering that my client doesn't understand web application security. I worry that they've succeeded in making me sound like an angry puritan. In many ways, this is a problem of politics, not technology, but I am at a loss. As a developer, I want to tell them to toss the whole project and start over with a new team, but I face a strong defense from the team that built it and a client who really needs to ship their product. Is my position here too harsh? Even if they fix the SQL Injection and XSS problems can I ever endorse the release of an unmaintainable tangle of spaghetti code?

    Read the article

  • Data Virtualization: Federated and Hybrid

    - by Krishnamoorthy
    Data becomes useful when it can be leveraged at the right time. It is not only enterprise application stores that operate on data of large volume, velocity and variety; mobile and social computing need to operate on that same data. Replicating and transferring large swaths of data is one challenge faced in the field of data integration. However, smaller chunks of data aggregated from a variety of sources present an even more interesting challenge to the industry. Over the past few decades, technology trends have focused on best user experience, operating systems, high performance computing, high performance web sites, analysis of warehouse data, service oriented architecture, social computing, cloud computing, and big data. Operating on 'dark data' becomes mandatory in future technology trends, although no solution can turn dark data into useful data in a single day. Useful data can be characterized by contextual, personalized and on-time delivery. In most cases, data from a single source may not complete the picture. Data has to be combined and computed from various sources, where data may be captured as hybrid data, meaning a combination of structured and unstructured data. Since related data is often found across disparate sources, effectively integrating these sources determines how useful this data ultimately becomes. Technology trends in 2013 are expected to focus on big data and private cloud. Consumers are not merely interested in where data is located or how data is retrieved and computed; consumers are interested in how quickly the data can be leveraged. In many cases, data virtualization is the right solution, and it is expected to play a foundational role for SOA, cloud integration, and big data. The Oracle Data Integration portfolio includes a data virtualization product called ODSI (Oracle Data Service Integrator). Unlike other data virtualization solutions, ODSI can perform both read and write operations on federated/hybrid data (RDBMS, web services, delimited files and XML). The ODSI engine is built on XQuery, hence ODSI users can perform computations on data using either XQuery or SQL. Built-in data and query caching features reduce latency for repetitive calls. Rightly positioned, ODSI can result in a highly scalable model, reducing spend on additional hardware infrastructure.

    Read the article

  • Password Manager that allows syncing across platforms

    - by lexu
    I use OS X, Linux, Solaris and Windows, for work and from home. There are good tools that allow me to manage the many logins/passwords required, platform independently. But mostly they expect me to carry a thumb-drive around or require direct access to a central location (a sky drive in the cloud). The thumb-drive is too easily lost (= synchronized backup needed), and the central location is not always reachable/mountable. Besides, company policy often rightly prevents this. Is there a tool that allows me to add passwords locally and then syncs its DB with the "mother-ship" later? Or is there another approach that you use that solves my problem? EDIT: My question is more about "synchronize" than cross-platform. I've evaluated (= read the feature lists of) some good cross-platform tools, but need one that does the synchronizing for me. By synchronize I mean "merge two versions", not "replace (hopefully) old file with new". I'm not sure I'm always disciplined/awake enough to prevent data loss. UPDATE: Lifehacker just posted that AgileSolutions now have a beta version of 1Password for Windows.

    Read the article

  • A tiered approach to cloning linux partitions

    - by Djurdjura
    I'm looking at a strategy for cloning Linux (root) partitions without having to use a Live CD. Literature rightly suggests that the source and target partitions must be unmounted to get a clean clone, which assumes that you need to use a LiveCD. I was wondering whether, instead of requiring a LiveCD, a third partition could emulate the LiveCD functionality. In other words, at a high level, a system with 3 partitions (all bootable): Rescue Partition (LiveCD emulation), Running Partition (Source), Backup Partition (Destination). All 3 partitions are LVMs. When it's time to clone the source partition to the backup (destination) partition, we would boot into the rescue partition, unmount the other 2 partitions (is that required?), run a disk check on the source, copy to the destination (dd, or a simple file copy to avoid replicating fragmentation from the source), run a disk check on the destination partition, update the GRUB menu list to allow booting from either partition, and reboot into that partition. My question: is this an approach that you'd recommend? What about the MBR in all this? Any gotchas or extra checks required? Thanks, D. PS. On recommendation from members, posting here instead of stackoverflow.com.

    Read the article

  • Volume-licensed copy of MS Office 2007 shows "Non Commercial Use" in title bar

    - by Linker3000
    I have just removed the demo copy of Office 2007 preinstalled on a new laptop and replaced it with an install of the full professional edition downloaded from the MS Volume Licensing site and installed one of our volume licence keys, yet the apps (Word etc.) show "Non Commercial Use" in the title bar, which is what usually happens in the Home and Student edition. I have tried: Deleting the Office registration keys in the registry and using one of our other Office 2007 volume licence keys (we have 7) when prompted to re-register Uninstalling Office completely and reinstalling it from a newly-downloaded ISO burned to CD and also from a compressed file that installs from hard disk/USB stick (both from Microsoft - no dodgy stuff) Yet the non-commercial message persists. Although it's a cosmetic issue, the laptop is going to be used for customer presentations and so the sales person is rightly concerned about the image this portrays. I presume there may be something floating around the registry or in a file somewhere but I can't find it. Articles I have found elsewhere just refer to the message being related to the use of a Home and Student licence key, which is 100% not the case. Any thoughts? Thanks.

    Read the article

  • How does one skip "Windows did not shut down successfully" in Win7-64?

    - by XenonofArcticus
    Migrating an app from an expensive and unreliable dedicated embedded x86 box running WinXP-embedded to COTS hardware (Dell E6410 laptop) running normal Win7-64. At this time, it's not feasible to deploy using Windows 7 embedded. The problem is that the system is still sort of "embedded". The power could shut off at virtually any time without prior warning. We've stripped the OS down and removed the battery capability so that it will power down as desired. The app never writes to the disk, so it's not like we're going to corrupt anything terribly. The system is essentially idle after our app is up and running (with the exception of some computation, graphics, and TCP/IP and serial communications), so the OS enters a pretty stable state rather quickly. After a power loss, however, it rightly complains that Windows did not shut down successfully and presents the user with the Windows Error Recovery text screen. If left alone, it does eventually move on and boots just fine, but we'd like to skip that step if possible. WinXP-embedded is designed to do this automatically, so I know it's possible. I've looked at the Kernel Switches but I didn't see anything documented for "Skip Windows Error Recovery". I've also read extensively on the startup process: http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/windows-nt-6-boot-process.html I know I can disable the auto chkdsk in the registry, but that's not the same thing either. So, how do I streamline the boot process to not hassle the user about a situation that will be the normal state of affairs?

    Read the article

  • What is a suitable simple, open web server for Windows?

    - by alficles
    I'm looking for a dead simple web server for Windows. Load will not be high, as it will primarily be serving binaries for a WPKG update service. It needs to serve the entire contents of a single folder over HTTP on a configurable (high) port. No CGI or other scripting is required, but it might be nice for future features. I started with Mongoose, since it doesn't even have an installation requirement (a very nice perk - technically, it acts as its own installer), but it fails to start when run as a service. I've investigated lighttpd as well, but it appears to be minimally (at best) tested on Windows. And naturally, I'm looking for something free. As in beer is good, but speech is better, as always. Edit: I didn't mention this initially, but non-tech people will be doing the install. They'll have whatever script I write for the install, but the goal is a simple system that is easy to troubleshoot. (I almost worded this question "What is the best...", but Serverfault rightly observed that that is a subjective question. And it's really not an optimization problem; any suitable solution will work. I just can't seem to find one for Windows.)

    Read the article

  • What are developers' problems with helpful error messages?

    - by Moo-Juice
    It continues to astound me that, in this day and age, products that have years of use under their belt, built by teams of professionals, still fail to provide helpful error messages to the user. In some cases, the addition of just a little extra information could save a user hours of trouble. A program that generates an error generated it for a reason. It has everything at its disposal to tell the user as much as it can about why something failed. And yet it seems that providing information to aid the user is a low priority. I think this is a huge failing. One example is from SQL Server. When you try to restore a database that is in use, it quite rightly won't let you. SQL Server knows what processes and applications are accessing it. Why can't it include information about the process(es) that are using the database? I know not everyone passes an Application Name attribute on their connection string, but even a hint about the machine in question could be helpful. Another candidate, also SQL Server (and MySQL), is the lovely "string or binary data would be truncated" error message and its equivalents. A lot of the time, a simple perusal of the generated SQL statement and the table shows which column is the culprit. This isn't always the case, though, and if the database engine picked up on the error, why can't it save us that time and just tell us which damned column it was? On this example, you could argue that there may be a performance hit to checking it and that this would impede the writer. Fine, I'll buy that. How about this: once the database engine knows there is an error, it does a quick after-the-fact comparison between the values that were going to be stored and the column lengths, then displays that to the user. ASP.NET's horrid Table Adapters are also guilty. Queries can be executed and one can be given an error message saying that a constraint somewhere is being violated. Thanks for that. Time to compare my data model against the database, because the developers were too lazy to provide even a row number or example data. (For the record, I'd never use this data-access method by choice; it's just a project I have inherited!) Whenever I throw an exception from my C# or C++ code, I provide everything I have at hand to the user. The decision has been made to throw it, so the more information I can give, the better. Why did my function throw an exception? What was passed in, and what was expected? It takes me just a little longer to put something meaningful in the body of an exception message. Hell, it does nothing but help me while I develop, because I know my code throws things that are meaningful. One could argue that complicated exception messages should not be displayed to the user. While I disagree with that, it is an argument that can easily be appeased by having a different level of verbosity depending on your build. Even then, the users of ASP.NET and SQL Server are not your typical users; they would prefer something full of verbosity and yummy information, because they can track down their problems faster. Why do developers think it is okay, in this day and age, to provide the bare minimum amount of information when an error occurs? It's 2011, guys. Come on.
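
    For instance, a minimal sketch (hypothetical names throughout) of the kind of context-rich exception the post argues for:

        // A sketch: include the value, the limit, and the offending column in the message.
        public static void ValidateColumn(string column, string value, int maxLength)
        {
            if (value.Length > maxLength)
            {
                throw new ArgumentException(string.Format(
                    "Column '{0}' allows {1} characters but the value is {2} characters long: \"{3}\"",
                    column, maxLength, value.Length, value));
            }
        }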

    Read the article

  • Google, typography, and cognitive fluency for persuasion

    - by Roger Hart
    Cognitive fluency is - roughly - how easy it is to think about something. Mere Exposure (or familiarity) effects are basically about reacting more favourably to things you see a lot. Which is part of why marketers in generic spaces like insipid mass-market lager will spend quite so much money on getting their logo daubed about the place; or that guy at the bus stop starts to look like a dating prospect after a month or two. Recent thinking suggests that exposure effects likely spin off cognitive fluency. We react favourably to things that are easier to think about. I had to give tech support to an older relative recently, and suggested they Google the problem. They were confused. They could not, apparently, Google the problem, because part of it was that their Google toolbar had mysteriously vanished. Once I'd finished trying not to laugh, I started thinking about typography. This is going somewhere, I promise. Google is a ubiquitous brand. Heck, it's a verb, and their recent, jaw-droppingly well constructed Paris advert is more or less about that ubiquity. It trades on Google's integration into any information-seeking behaviour. But, as my tech support encounter suggests, people settle into comfortable patterns of thinking about things. They build schemas, and altering them can take work. Maybe the ubiquity even works to cement that. Alongside their online effort, Google is running billboard campaigns to advertise Chrome, a free product in a crowded space. They are running these ads in some kind of kooky Calibri / Comic Sans hybrid. Now, at first it seems odd that one of the world's more ubiquitous brands needs to run a big print campaign in public places - surely they have all the fluency they need? Well, not so much. Chrome, after all, is not the same as their core product, so there's some basic awareness work to do, and maybe a whole new batch of exposure effect to try and grab. But why the typeface? It's heavily foregrounded, and the ads are extremely textual. Plus, don't we all know that jovial, off-beat fonts look unprofessional, or something? There's a whole bunch of people who want (often rightly) to ban Comic Sans. I wonder, though. Are Google trying to subtly disrupt cognitive fluency? There's an interesting paper (pdf) about - among other things - the effects of typography on the way people answer survey questions. Participants given the slightly harder-to-read question gave more abstract answers. The paper references other work suggesting that, generally speaking, less-fluent question framing elicits more considered answers. The Chrome ad typeface is less fluent for print. Reactions may therefore be more considered, abstract, and disruptive. Is that, in fact, what Google need? They have brand ubiquity, but they want here to change accustomed behaviour, to get people to think about changing their browser. Is this actually a very elegant piece of persuasive information design? If you think about their "what is a browser?" vox pop research video, there's certainly a perceptual barrier they're going to have to tackle somehow.

    Read the article

  • Django authentication in django nonrel on GAE

    - by tooba
    I'm using the Django nonrel project on a Google App Engine project running locally in development. I've created my own models and these are fine when they are saved and retrieved in the datastore. I'm hoping to use django.contrib.auth to provide the user functionality. I can use the shell to create users and these get assigned an ID. When I create one of my own models which references User, I have to pass in a user ID, as it quite rightly fails otherwise. However, checking via the GAE admin interface, I can't see the User model in the datastore for the users I've created via the shell. Nor can I retrieve the user details from one of my models which references them: calls to mymodel.user.username return nothing. Nor can I log into admin using the username and password I've set up. I can see saved versions of the models I've made in the GAE admin app. I get the impression that users are being created somewhere other than the datastore. Is there something else I need to do to use the standard contrib.auth users with django-nonrel and GAE?

    Read the article

  • When am I ready for contracting work?

    - by Kirk Broadhurst
    What skills are required to work on a temp/contracting basis, and how does a developer know when they are ready to work in these circumstances? I have some colleagues who are suggesting contracting work is 'the way to go'; the pay is significantly better. It appears that when a permanent position pays (X times $10k) per year, the corresponding contract position pays almost $X per hour - which is close to twice as much. I look forward to doing this type of work as an experienced expert, but worry that by doing so right now I'd be turning away learning and development opportunities. My assumptions about contract work are the following: less/zero money invested in training and development; less concern for job satisfaction and learning ("they won't be here in 12 months"); possibly less concern about the overall quality of the project ("we won't be here in 12 months"). There are similar questions on Stack Overflow at the moment, but what I'm really looking for is: What does the developer give up by moving to contract work? Is it preferable to have structured learning in a permanent position rather than seat-of-the-pants learning in a contract position? What skill level should the contractor have before making the move? Is there still the same kind of growth in contracting that there is in permanent positions? Rightly or wrongly, I see this as a choice between $ (contracting) and learning/development (permanent). Is this fair?

    Read the article

  • So can unique_ptr be used safely in stl collections?

    - by DanDan
    I am confused by unique_ptr and the rvalue/move philosophy. Let's say we have two collections: std::vector<std::auto_ptr<int>> autoCollection; std::vector<std::unique_ptr<int>> uniqueCollection; Now I would expect the following to fail, as there is no telling what the algorithm is doing internally, maybe making internal pivot copies and the like, thus ripping away ownership from the auto_ptr: std::sort(autoCollection.begin(), autoCollection.end()); I get this. And the compiler rightly disallows this happening. But then I do this: std::sort(uniqueCollection.begin(), uniqueCollection.end()); And this compiles. And I do not understand why. I did not think unique_ptrs could be copied. Does this mean a pivot value cannot be taken, so the sort is less efficient? Or is this pivot actually a move, which in fact is as dangerous as the collection of auto_ptrs and should be disallowed by the compiler? I think I am missing some crucial piece of information, so I eagerly await someone to supply me with the aha! moment.

    Read the article

  • VS2008 adding SQL Server Database (SQL Server 2008 Mgmt Studio) not working

    - by Kahn
    I'm trying to practice using ASP.NET MVC at home, but I ran into an impossible problem. I cannot open a connection to SQL Server 2008; I get this error: "Connections to SQL Server files (*.mdf) require SQL Server Express 2005 to function properly. ..." I've googled around for numerous responses, none of them either working or addressing this issue. I'm running Vista 32-bit, my SQL Server 2008 Mgmt Studio is also 32-bit, and I have SP1 installed on both VS2008 Professional and the SQL Server. I changed the machine.config connectionStrings from ./SQLExpress to my SQL Server 2008 name. Now if I connect manually through web.config, in an asp:datasource or code-behind, everything works fine. But for some reason, trying to add a DB Connection directly like this always gets the error. This is pretty fatal, since I can't rightly do much unless I can use LINQ to SQL with my MVC test project, and this is the only way I know how. Worked fine in school and at work, but not at home. Installing SQL Server Express 2005, as some have suggested, is not an option. Obviously it HAS to work with SQL Server 2008. Thanks in advance.
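
    For what it's worth, a hedged sketch of the usual workaround (server and database names here are hypothetical): connect to the running SQL Server 2008 instance by name rather than through an .mdf file, since it is the AttachDbFilename-style file connection that triggers the SQL Server Express 2005 requirement.

        // A sketch, not the poster's configuration.
        using System.Data.SqlClient;

        class ConnectionSmokeTest
        {
            static void Main()
            {
                var cs = @"Data Source=.\SQL2008;Initial Catalog=TestDb;Integrated Security=True";
                using (var conn = new SqlConnection(cs))
                {
                    conn.Open(); // succeeds against full SQL Server 2008, no Express 2005 needed
                }
            }
        }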

    Read the article

  • Issues with securing ELMAH file

    - by Kumar
    I am trying to allow access to the elmah.axd file for only a particular login, with all others denied. I have followed Phil Haack's tutorial for securing the file. <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/> <add verb="POST,GET,HEAD" path="admin/elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" /> </httpHandlers> <location path="admin"> <system.web> <authorization> <allow users="[email protected]"/> <deny users="*"/> </authorization> </system.web> </location> First I logged in as [email protected] and tried to access the http://localhost:58961/admin/elmah.axd file, and I was rightly redirected to the Login.aspx page. Next I logged in as [email protected] and was able to access the elmah file at http://localhost:58961/admin/elmah.axd. Then I logged in again as [email protected], and this time I was able to access the elmah file. What is the reason for this behavior?

    Read the article

  • Servlet Security question about j_security_check, j_username and j_password

    - by Nitesh Panchal
    Hello, I used jdbcRealm in my web application and it's working fine. I also defined all constraints in my web.xml; for example, all pages matching the URL pattern /Admin/* should be accessible only to the admin role. I have a login form that uses the standard j_security_check, j_username and j_password. Now, when I type Admin/home.jsf it rightly redirects me to login.jsf, and when I type the password there I am redirected to home.jsf. This works alright, but the problem comes when I go directly to login.jsf and then type the username and password. This time it redirects me back to login.jsf again. Is there any way I can specify which page to go to on successful login? I need to specify different pages for different roles: for Admin it is /Admin/home.jsf, and for general users it is /General/home.jsf, because the login form is shared between different types of users. Where do I specify all these things? Secondly, I want to have a "remember me" checkbox at the end of the login form. How do I do this? By default the form is submitted to the j_security_check servlet and I have no control over its execution. Please help. This doesn't seem so hard, but it looks like I am missing something.

    Read the article
