Search Results

Search found 15061 results on 603 pages for 'modified date'.


  • Elo system behaves oddly in program I've created

    - by adc
    Alright, so I'm looking to build a small program (C# and XAML) that essentially does this:

    - Generate an array of players, of a length specified by the user. Each player has a current rating and a true rating. I set the current rating to 1200 as a starting point right now; I've also tried setting it to the true rating, and to the average of the two. True rating is what their skill level actually is. It is calculated from percentages based on the current League of Legends rating system; generating an array of 970 thousand players gives results very similar to the published data (link removed due to URL limit - but trust me, the results are very similar).
    - If need be, sort the array from smallest to largest.
    - Play X number of games, again specified by the user. This is done by taking the array of players (which is sorted by current rating after being created) and running through it in groups of 10. The first five are on team one, the second five are on team two. It then takes the true rating of these players and calculates an expected chance to win using the Elo system. It generates a random double and compares it to the expected chance to win; if the number is lower, team one wins - otherwise team two wins.
    - Update the ratings of the players, again via the Elo system, giving the winning team a score of 1 and the losing team a score of 0. I use a K value of 36 (but have tried 12, 24, and even higher values) and an F value of 400.
    - After going through the entire array of players (which I have conveniently forced to be a multiple of ten), sort the array again by current rating.

    This, if my understanding of the Elo system is correct, runs properly. However, it doesn't seem to work. I have a running test telling me how many players of the full array are within 100 current rating of their true rating. I would expect some portion of the population to be outside this range (as probability is not always going to go in their favor), but a full 40-45% of the population is outside of it. I also have it outputting the maximum difference between true and current rating - and I have never seen this drop below 500! It hovers between 550-600, occasionally going over or under. I'm at a loss as to what to change - I've fiddled with the K and F values, where I start all the players, etc., but nothing changes the fact that eventually a good 40% of the population is outside the range. And it isn't that I'm playing too few games - it has now run through over 60 thousand games and the problem never disappears or really fluctuates. The full C# code, including everything except the XAML file and the Player class (pastebin is being very slow and I can only post two links, so I can't link to the XAML file): http://pastebin.com/rFcZRL84 The Player class: http://pastebin.com/4cJTdTRu I guess my question is: did I do anything wrong? Is there a problem with the way I implemented the system, or does Riot use a significantly modified Elo system? I don't think it's the latter, as that still wouldn't explain the massive differences between true and current rating.
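
    For reference, here is a minimal sketch of the match-and-update step described above, with K = 36 and F = 400. Note the assumptions: the Player class is a hypothetical stand-in for the one on pastebin, the team expectation is taken as the average of the five players' true ratings, and the same expectation is reused for the rating update (the post doesn't say exactly how either is combined):

        using System;
        using System.Linq;

        class Player
        {
            public double TrueRating;
            public double CurrentRating;
        }

        static class Elo
        {
            const double K = 36.0;
            const double F = 400.0;

            // Play one 5v5 match and update the current ratings of all ten players.
            public static void PlayMatch(Player[] teamOne, Player[] teamTwo, Random rng)
            {
                // Expected win chance for team one, from the teams' average true ratings.
                double r1 = teamOne.Average(p => p.TrueRating);
                double r2 = teamTwo.Average(p => p.TrueRating);
                double expectedOne = 1.0 / (1.0 + Math.Pow(10.0, (r2 - r1) / F));

                // Simulate the outcome: a lower roll means team one wins.
                double scoreOne = rng.NextDouble() < expectedOne ? 1.0 : 0.0;

                // Standard Elo update: rating += K * (score - expected score).
                foreach (var p in teamOne)
                    p.CurrentRating += K * (scoreOne - expectedOne);
                foreach (var p in teamTwo)
                    p.CurrentRating += K * ((1.0 - scoreOne) - (1.0 - expectedOne));
            }
        }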

    Read the article

  • WebAPI and MVC4 and OData

    - by Aligned
    I was looking closer into WebAPI, specifically how to use OData to avoid writing GetCustomerByCustomerId(int id) methods all over the place. I had problems just returning IQueryable<T> as some sites suggested, in WebAPI (assembly System.Web.Http.dll, v4.0.0.0). I think things changed in the release version and the blog posts are still out of date. There is no [Queryable] as the answer to this question suggests. Once I got the WebAPI.OData NuGet package and added [Queryable] to the method, http://localhost:57146/api/values/?$filter=Id%20eq%201 worked (don't forget the '$').

    Now the main question is whether I should do this at all, and how to stop logged-in users from sniffing the URL and getting data for other users. John V. Peterson has a post on securing WebAPI with headers and intercepting the call at that point. He later posted an update to use HttpMessageHandlers instead. I think I'll use this to force the call to contain some kind of unique code for the user, but I'm still thinking about it. I will not expose this to the public, just to my own calls within my Forms Authentication areas.

    Other links:
    - http://robbincremers.me/2012/02/16/building-and-consuming-rest-services-with-asp-net-web-api-and-odata-support/ ~ lots of good information
    - John V. Peterson example: https://github.com/johnvpetersen/ASPWebAPIExample ~ all data access goes through the WebAPI and the web client doesn't have a connection string ~ there is a code library for calling the WebAPI from MVC using the HttpClient; it's a great starting point
    - http://blogs.msdn.com/b/alexj/archive/2012/08/15/odata-support-in-asp-net-web-api.aspx ~ beta (9/18/2012) NuGet package to help with what I want to do? ~ has a sample code project with examples
    - http://blogs.msdn.com/b/alexj/archive/2012/08/21/web-api-queryable-current-support-and-tentative-roadmap.aspx
    - http://stackoverflow.com/questions/10885868/asp-net-mvc4-rc-web-api-odata-filter-not-working-with-iqueryable

    JSON: pass the correct format in the header (Accept: application/json); $format=JSON doesn't appear to be working. Also, async methods are built into WebAPI - look for the GetAsync methods.
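
    For illustration, a minimal sketch of the controller shape described above, assuming the [Queryable] attribute supplied by the WebAPI OData NuGet package; the Value type and the in-memory data are hypothetical placeholders:

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Http;

        public class Value
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class ValuesController : ApiController
        {
            // Hypothetical in-memory data; in practice this would be a repository.
            static readonly List<Value> Data = new List<Value>
            {
                new Value { Id = 1, Name = "one" },
                new Value { Id = 2, Name = "two" }
            };

            // [Queryable] applies OData query options from the URL ($filter,
            // $orderby, $top, ...) to the returned IQueryable, so
            // /api/values/?$filter=Id eq 1 works without a dedicated method.
            [Queryable]
            public IQueryable<Value> Get()
            {
                return Data.AsQueryable();
            }
        }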

    Read the article

  • Meaning of Crawl errors

    - by com
    My question is about the definition of Crawl errors in Google Webmaster Tools. Crawl errors are divided into a few sections.

    HTTP section: I assume all the broken links in this section were somehow found by the crawler, and that these are not links from the sitemap. But if these links were found by scanning the pages in the sitemap for links, why doesn't it mention the source page, like the sitemap section does with its Linked From column? Please correct me if I am wrong.

    Sitemap section: it looks like all these links came from my sitemap. But there is a Linked From column. If I already know that all these broken links are from the sitemap, then in order to fix the errors I should revise my sitemap. Am I wrong?

    Not followed section: I don't know what this means. It looks like it accumulates all links that caused a redirect, but for some reason Google considers all those redirects to be wrong redirects. Do you know if there is any set of rules for determining a wrong redirect? I actually found my mistake: I tried to normalize URLs and redirect them to the right URL, but I did the normalization in a wrong way.

    Not found section: this is like the HTTP section but with 404 errors, and it does have a Linked From column. But very often Linked From shows "unavailable". What does that mean - that Google cannot tell me how it found this non-existent page? And how is this section related to the sitemap section - does it contain all the 404 links from the sitemap too? There are far too many 404 links, many more than in the sitemap. I took a look at what we have in Linked From, and I saw that one link came from the sitemap two months ago. But why does Google keep it indexed? The link is already dead, and the new sitemap doesn't have it. Is there any expiry date for old links?

    Unreachable section: this looks like the section for 500 errors. It doesn't have a Linked From column. There are too many completely meaningless links; I really don't know where this stuff came from, and without Linked From I am not able to figure out how to deal with it.

    Sorry for such a big topic, but I just want to make clear what every section stands for, because that's extremely crucial in order to deal with all these problems. Hopefully it will be useful not just for me. Thanks!

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-05

    - by Bob Rhubart
    - Webcast: Oracle Maximum Availability Architecture Best Practices (event.on24.com) - Date: Thursday, April 12, 2012. Time: 10:00 AM PDT. Oracle expert Tom Kyte discusses how Oracle's Maximum Availability Architecture can help to minimize the costs and risk of downtime.
    - Oracle Enterprise Manager Ops Center 12c Launch - Interactive Webcast and Live Chat (www.oracle.com) - Thursday, April 12, 2012. 9 a.m. PT / 12 p.m. ET / 4 p.m. GMT. Speakers: Steve Wilson (VP Systems Management, Oracle), John Fowler (Exec VP Systems, Oracle), Brad Cameron (VP Development, Oracle Fusion Middleware), Bill Nesheim (VP Oracle Solaris), Dennis Reno (VP Customer Portal Experience, Oracle), Mike Wookey (Chief Architect, Oracle Enterprise Manager Ops Center), Prasad Pai (Sr Director, Oracle Enterprise Manager Ops Center).
    - 2012 Real World Performance Tour Dates | Performance Tuning | Performance Engineering (www.ioug.org) - Coming to your town: a full day of real world database performance with Tom Kyte, Andrew Holdsworth, and Graham Wood. Rochester, NY - March 8; Los Angeles, CA - April 30; Orange County, CA - May 1; Redwood Shores, CA - May 3.
    - Oracle Technology Network Developer Day: MySQL - New York (www.oracle.com) - Wednesday, May 02, 2012, 8:00 AM - 4:30 PM. Grand Hyatt New York, 109 East 42nd Street, Grand Central Terminal, New York, NY 10017.
    - Webcast Series: Data Warehousing Best Practices (event.on24.com) - April 19, 2012: Best Practices for Workload Management of a Data Warehouse on Oracle Exadata. May 10, 2012: Best Practices for Extreme Data Warehouse Performance on Oracle Exadata.
    - How to create a Global Rule that stores a document's folder path in a custom metadata field | Nicolas Montoya (blogs.oracle.com) - An illustrated how-to from Oracle Fusion Middleware A-Team blogger Nicolas Montoya.
    - Get Proactive with Fusion Middleware | Daniel Mortimer (blogs.oracle.com) - Daniel Mortimer shows how to access "a one stop shop for navigating to proactive support material, tools, and communication channels related to Oracle Fusion Middleware."
    - Build an enterprise on 'other peoples' work', via SOA and cloud | Joe McKendrick (www.zdnet.com) - Are you down with OPW? Joe McKendrick's synopsis of a recent presentation by David Linthicum focuses on reuse.
    - Oracle Fusion Middleware Security: Unsolicited login with OAM 11g | Chris Johnson (fusionsecurity.blogspot.com) - Chris Johnson shows how to create a shopping cart login model using "plain old HTML."
    - How to use the Human WorkFlow Web Services | Edwin Biemond (biemond.blogspot.com) - Oracle ACE Edwin Biemond shows how to invoke two WorkFlow web services to query the Human task in Oracle SOA Suite with your own ordering and restrictions.
    - Bad Practice Use Case for LOV Performance Implementation in ADF BC | Andrejus Baranovskis (andrejusb.blogspot.com) - "If you want to learn something well, there is nothing better [than] to learn bad practices first," says Oracle ACE Director Andrejus Baranovskis.

    Thought for the Day: "The best meetings get real work done. When your people learn that your meetings actually accomplish something, they will stop making excuses to be elsewhere." - Larry Constantine

    Read the article

  • Introducing Windows Azure Mobile Services

    - by Clint Edmonson
    Today I'm excited to share that the Windows Azure Mobile Services public preview is now available. This preview provides a turnkey backend cloud solution designed to accelerate connected client app development. These services streamline the development process by enabling you to leverage the cloud for common mobile application scenarios such as structured storage, user authentication, and push notifications. If you're building a Windows 8 app and want a fast and easy path to creating backend cloud services, this preview provides the capabilities you need. You can take advantage of the cloud to build and deploy modern apps for Windows 8 devices in anticipation of general availability on October 26th. Subsequent preview releases will extend support to iOS, Android, and Windows Phone.

    Features: The preview makes it fast and easy to create cloud services for Windows 8 applications within minutes. Here are the key benefits:
    - Rapid development: configure a straightforward and secure backend in less than five minutes.
    - Create modern mobile apps: common Windows Azure plus Windows 8 scenarios that the Windows Azure Mobile Services preview will support include:
      - automated service API generation providing CRUD functionality and dynamic schematization on top of structured storage
      - structured storage with powerful query support, so a Windows 8 app can seamlessly connect to a Windows Azure SQL database
      - integrated authentication, so developers can configure user authentication via Windows Live
      - push notifications, to bring your Windows 8 apps to life with up-to-date and relevant information
    - Access structured data: connect to a Windows Azure SQL database for simple data management and dynamically created tables; permissions are easy to set and manage.

    Pricing: One of the key things we've consistently heard from developers about using Windows Azure with mobile applications is the need for a low-cost and simple offer. The simplest way to describe the pricing for Windows Azure Mobile Services at preview is that it is the same as Windows Azure Websites during preview.
    - What's free? Run up to 10 Mobile Services for free in a multitenant environment. Free with a valid Windows Azure Free Trial: 1GB SQL Database, unlimited ingress, and 165MB/day egress.
    - What do I pay for? Scaling up to dedicated VMs, and - once the Windows Azure Free Trial expires - SQL Database and egress.

    Getting Started: To start using Mobile Services, you will need to sign up for a Windows Azure free trial, if you have not done so already. If you already have a Windows Azure account, you will need to request enrollment in this preview feature. Once you've enrolled, this getting started tutorial will walk you through building your first Windows 8 application using the preview's services. The developer center contains more resources to teach you how to:
    - validate and authorize access to data using easy scripts that execute securely, on the server
    - easily authenticate your users via Windows Live
    - send toast notifications and update live tiles in just a few lines of code

    Our pricing calculator has also been updated to calculate costs for these new mobile services. Questions? Ask in the Windows Azure Forums. Feedback? Send it to [email protected].
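
    As a taste of the preview's client API, here is a hedged sketch of inserting a record from a Windows 8 app, following the quickstart pattern of the time; the service URL, application key, and TodoItem type are placeholders, not values from this post:

        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.MobileServices;

        public class TodoItem
        {
            public int Id { get; set; }
            public string Text { get; set; }
        }

        public static class MobileServicesSample
        {
            // Placeholder endpoint and key; use your own service's values.
            static readonly MobileServiceClient Client = new MobileServiceClient(
                "https://your-service.azure-mobile.net/",
                "your-application-key");

            public static async Task SaveAsync(string text)
            {
                // InsertAsync writes to the dynamically schematized table
                // backed by a Windows Azure SQL database.
                await Client.GetTable<TodoItem>().InsertAsync(new TodoItem { Text = text });
            }
        }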

    Read the article

  • Live Virtual Class for Partners: Application Management

    - by Patrick Rood
    November 11-12th: Manageability Partner Community - Application Management Suite Live Virtual Training

    This training will be offered to Oracle Partners over a live webcast during business hours. Each day will consist of approximately 2-3 hours of lecture and demos. It will be recorded and available for playback.

    Purpose: This virtual course is a comprehensive program of training sessions, prepared and presented by Product Managers. It ensures you have all the information you need to position and sell Oracle Application Management Suites. The sessions will be lecture based, with demonstrations to complement them. These sessions are interactive and everyone will be required to participate. Customer case studies will be used as appropriate, and there will be plenty of opportunity for in-depth discussion. Please bring to the training an understanding of what Enterprise Manager 12c does for our customers, along with your own experiences to date.

    Logistics:
    - Topic: Oracle Application Management Suite Training (2 days, approximately 2-3 hours per day)
    - WebEx session details to be provided upon registration
    - Monday 11th November | 14:00 GMT | 18:00 Gulf (GMT+4)
    - Tuesday 12th November | 14:00 GMT | 18:00 Gulf (GMT+4)

    Read the article

  • Should custom data elements be stored as XML or database entries?

    - by meteorainer
    There are a ton of questions like this, but they are mostly very generalized, so I'd like to get some views on my specific usage.

    General: I'm building a new project on my own in Django. Its focus will be on small businesses. I'd like to make it somewhat customizable for my clients so they can add to their customer/invoice/employee/whatever items. My models would reflect boilerplate fields that every ModelX might have - for example: first name, last name, email address, and so on. Then my users would be able to add fields for whatever data they might like. I'm still in the design phase and am building this myself, so I've got some options.

    Working on: Right now the 'extra items' models have a FK to the generic model (Customer and CustomerDataPoints, for example). All values in the extra data points are stored as char and will be coerced/parsed into their actual format when the view is built. In this build the user could theoretically add whatever values they want, group them in sets, and generally access them at will from the views relevant to that model. Pros: low storage overhead, very extensible, searchable. Cons: more SQL joins.

    My other option is to use some type of markup, or key-value pairing, stored directly on the boilerplate models. This could essentially be any low-overhead method, whether XML or literal strings. The view and form generated from the stored data would take control of validation and reorganizing on updates. Then it would just dump the data back in as a char/blob/whatever. Something like:

        <datapoint type='char' value='something' required='true' />
        <datapoint type='date' value='01/01/2001' required='false' />
        ...

    Pros: no joins needed; updates for validation and views are decoupled from the data. Cons: much higher storage overhead; limited capacity to search on the extra content.

    So my question is: if you didn't live within the constraints imposed by your company, which method would you use, and why? What benefits or pitfalls do you see down the road for me as a small business trying to help other small businesses? Just to clarify, I am not asking about custom UI elements - those I can handle with forms and template snippets. I'm asking primarily about data storage and retrieval of non-standardized data relative to a boilerplate model.

    Read the article

  • RewriteRule not working for certain URLs

    - by keiki
    There are a few domains pointing towards the same server, and of course I need them all redirected to only one of them. The redirects work, but only for certain URLs.

    What works: http://www.domain.com, http://domain.com, domain.com/index.html, domain.com/index.php, domain.com/nonExistentDirectory, and if I click in the menu the following URLs are also redirected correctly: domain.com/foo/bar, domain.com/foo/bar.html (or .php or another extension).

    What doesn't work: domain.com/existentDirectory, and domain.com/foo/bar if I type the URL in the address bar.

    If anyone has the time, skill, and will to tell me where the mistake is, I'll be deeply grateful. Here's my .htaccess file:

        AddHandler x-httpd-php .html .htm

        <ifModule mod_gzip.c>
        mod_gzip_on Yes
        mod_gzip_dechunk Yes
        mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$
        mod_gzip_item_include handler ^cgi-script$
        mod_gzip_item_include mime ^text/.*
        mod_gzip_item_include mime ^application/x-javascript.*
        mod_gzip_item_exclude mime ^image/.*
        mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
        </ifModule>

        <ifModule mod_expires.c>
        ExpiresActive On
        ExpiresDefault "access plus 1 seconds"
        ExpiresByType text/html "access plus 1 seconds"
        ExpiresByType image/gif "access plus 2592000 seconds"
        ExpiresByType image/jpeg "access plus 2592000 seconds"
        ExpiresByType image/png "access plus 2592000 seconds"
        ExpiresByType text/css "access plus 2592000 seconds"
        ExpiresByType text/javascript "access plus 216000 seconds"
        ExpiresByType application/x-javascript "access plus 216000 seconds"
        </ifModule>

        <ifModule mod_headers.c>
        <filesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|swf)$">
        Header set Cache-Control "max-age=2592000, public"
        </filesMatch>
        <filesMatch "\.(css)$">
        Header set Cache-Control "max-age=2592000, public"
        </filesMatch>
        <filesMatch "\.(js)$">
        Header set Cache-Control "max-age=216000, private"
        </filesMatch>
        <filesMatch "\.(xml|txt)$">
        Header set Cache-Control "max-age=216000, public, must-revalidate"
        </filesMatch>
        <filesMatch "\.(html|htm|php)$">
        Header set Cache-Control "max-age=1, private, must-revalidate"
        </filesMatch>
        </ifModule>

        <ifModule mod_headers.c>
        Header unset ETag
        </ifModule>
        FileETag None

        <ifModule mod_headers.c>
        Header unset Last-Modified
        </ifModule>

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

        RewriteCond %{HTTP_HOST} ^foo\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]

        RewriteCond %{HTTP_HOST} ^foo1\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo1\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]

        RewriteCond %{HTTP_HOST} ^foo2\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo2\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]

        RewriteCond %{HTTP_HOST} ^foo3\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo3\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]

        RewriteCond %{HTTP_HOST} ^foo8\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo8\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]

    Thinking that the above version was overkill, I've also tried to redirect all requests for domains different from the main one like this:

        RewriteCond %{HTTP_HOST} !^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://domain.com [L,R=301]

    Is it also wrong? Because it doesn't work either!

    P.S.
    @Sodved: I've tried that and it doesn't help (I comment here because I can't seem to be able to comment on your answer). Removing the following piece of code didn't solve the issue either, so the problem must be somewhere else:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    New details: using this tool for checking the redirects, I got the following results for the URLs that are not redirected:

        Checked link: http://domain.com/aDirectory/
        Type of link: direct link

    (note the trailing slash above) and:

        Checked link: http://domain.com/aDirectory
        Type of redirect: 301 Moved Permanently
        Redirected to: http://domain.com/aDirectory/

    (no trailing slash in the checked link here). I hope/suspect I'm getting closer to the cause of this behavior.
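
    For comparison only - not a verified fix for this setup - the usual arrangement is to place the host-canonicalization rules before the WordPress front-controller block, since per-directory rewrite rules are processed in order and the WordPress catch-all carries an [L] flag. A hedged sketch, which also keeps the captured path (the simplified variant above drops it):

        RewriteEngine On

        # Canonical-host redirect first, so it applies to every request,
        # including requests for existing files and directories.
        RewriteCond %{HTTP_HOST} !^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://domain.com/$1 [R=301,L,QSA]

        # BEGIN WordPress
        # ... front-controller rules unchanged ...
        # END WordPress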

    Read the article

  • Software monetization that is not evil

    - by t0x1n
    I have a free open-source project with around 800K downloads to date. I've been contacted by some monetization companies from time to time and turned them down, since I didn't want toolbar malware associated with my software. I was wondering, however: is there a non-evil way to monetize software? Here are the options as I know them:

    - Add a donation button. I don't feel comfortable with that as I really don't need "donations" - I'm paid quite well. Donating users may feel entitled to support etc. (see the second-to-last bullet).
    - Add ads inside your application. On the web that may be acceptable, but in a desktop program it looks incredibly lame.
    - Charge a small amount for each download. This model works well in the mobile world, but I suspect no one will go for it on the desktop. It doesn't mix well with open source, though I suppose I could charge only for the binaries (most users won't go to the hassle of compiling the sources). People may expect support etc. after having explicitly paid (see next bullet).
    - Make money off a service / community / support associated with the program. This is one route I definitely don't want to take; I don't want any sort of hassle beyond coding. I assure you, the program is top notch (albeit simple) and I'm not aware of any bugs as of yet (there are support forums and blog comments where users may report them). It is also very simple, documented, and discoverable, so I do think I have a case for supplying it "as is".
    - Add affiliate suggestions to your installer. If you use a monetization company, you lose control over what they propose. Unless you can establish some sort of strong trust with the company to supply quality suggestions (I sincerely doubt it), I can't have that. Choosing your own affiliate (e.g. directly suggesting Google Toolbar) is possibly the only viable solution to my mind. Problem is, where do I find a solid affiliate that could actually give value to the user rather than infect his computer with crapware? I thought maybe Babylon (not the toolbar of course, I hate toolbars)?

    Read the article

  • How to translate along Z axis in OpenTK

    - by JeremyJAlpha
    I am playing around with an OpenGL sample application I downloaded for Xamarin.Android. The sample application produces a rotating colored cube. I would simply like to edit it so that the rotating cube is translated along the Z axis and disappears into the distance. I modified the code by:

    - adding a cumulative variable to store my Z distance,
    - adding GL.Enable(All.DepthBufferBit) - unsure if I put it in the right place,
    - adding GL.Translate(0.0f, 0.0f, Depth) - before the rotate functions.

    Result: the cube rotates a couple of times, then disappears; it seems to be getting clipped out of the frustum. So my question is: what is the correct way to use and initialize the Z buffer and get the cube to travel along the Z axis? I am sure I am missing some function calls but am unsure of what they are and where to put them. I apologise in advance as this is very basic stuff, but I am still learning :P. I would appreciate it if anyone could show me the best way to get the cube to still rotate but also move along the Z axis. I have commented all my modifications in the code:

        // This gets called when the drawing surface is ready
        protected override void OnLoad (EventArgs e)
        {
            // this call is optional, and meant to raise delegates
            // in case any are registered
            base.OnLoad (e);

            // UpdateFrame and RenderFrame are called
            // by the render loop. This takes effect
            // when we use 'Run ()', like below
            UpdateFrame += delegate (object sender, FrameEventArgs args)
            {
                // Rotate at a constant speed
                for (int i = 0; i < 3; i++)
                    rot [i] += (float) (rateOfRotationPS [i] * args.Time);
            };

            RenderFrame += delegate
            {
                RenderCube ();
            };

            GL.Enable(All.DepthBufferBit); //Added by Noob
            GL.Enable(All.CullFace);
            GL.ShadeModel(All.Smooth);
            GL.Hint(All.PerspectiveCorrectionHint, All.Nicest);

            // Run the render loop
            Run (30);
        }

        void RenderCube ()
        {
            GL.Viewport(0, 0, viewportWidth, viewportHeight);

            GL.MatrixMode (All.Projection);
            GL.LoadIdentity ();
            if ( viewportWidth > viewportHeight )
            {
                GL.Ortho(-1.5f, 1.5f, 1.0f, -1.0f, -1.0f, 1.0f);
            }
            else
            {
                GL.Ortho(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
            }

            GL.MatrixMode (All.Modelview);
            GL.LoadIdentity ();

            Depth -= 0.02f; //Added by Noob
            GL.Translate(0.0f, 0.0f, Depth); //Added by Noob
            GL.Rotate (rot[0], 1.0f, 0.0f, 0.0f);
            GL.Rotate (rot[1], 0.0f, 1.0f, 0.0f);
            GL.Rotate (rot[2], 0.0f, 1.0f, 0.0f);

            GL.ClearColor (0, 0, 0, 1.0f);
            GL.Clear (ClearBufferMask.ColorBufferBit);

            GL.VertexPointer(3, All.Float, 0, cube);
            GL.EnableClientState (All.VertexArray);
            GL.ColorPointer (4, All.Float, 0, cubeColors);
            GL.EnableClientState (All.ColorArray);
            GL.DrawElements(All.Triangles, 36, All.UnsignedByte, triangles);

            SwapBuffers ();
        }
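
    For reference, a hedged sketch of the depth setup this code seems to be reaching for, using the same All/ClearBufferMask API as above. The key differences: depth testing is enabled with All.DepthTest (All.DepthBufferBit is a clear-mask flag, not a glEnable capability), the depth buffer is cleared every frame, and the orthographic near/far planes are widened so the translated cube stays inside the view volume. Treat it as a starting point, not a verified fix:

        // In OnLoad: enable depth testing (a capability, unlike DepthBufferBit).
        GL.Enable(All.DepthTest);

        // In RenderCube: give the view volume room along Z...
        GL.Ortho(-1.5f, 1.5f, 1.0f, -1.0f, -10.0f, 10.0f);

        // ...and clear depth as well as color every frame.
        GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

    Note that with an orthographic projection the cube will not shrink as it recedes; for the "disappears into the distance" effect a perspective projection (e.g. GL.Frustum) would be needed.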

    Read the article

  • links for 2011-01-31

    - by Bob Rhubart
    - Do (Software) Architects Architect? - "The first question is, 'Why is architect being used as a verb?' The Merriam-Webster dictionary does not contain a definition of architect as a verb, nor do many other recognized dictionaries." - TheCPUWizard (tags: softwarearchitecture)
    - Oracle Business Intelligence Blog: Gartner Magic Quadrant for BI Platforms 2011 - "Oracle customers indicate they deploy the Oracle Business Intelligence Suite Enterprise Edition (OBIEE) platform to support among the most complex deployments in our survey." - Gartner (tags: oracle businessintelligence gartner)
    - Oracle BI Server Modeling, Part 1 - Designing a Query Factory (Oracle BI Foundation) - Bob Ertl lays the groundwork for Business Intelligence modeling concepts with a look at "the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing." (tags: oracle otn businessintelligence)
    - Tom Graves: Modelling people in enterprise-architecture - Tom says: "One of the key characteristics of 'crossing the chasm' to a viable whole-of-enterprise architecture is the explicit inclusion of people. In short, we need to be able to model and map where people fit in relation to the architecture. But there's a catch. A big catch." (tags: entarch)
    - Java developer webcasts for customers and partners (SOA Partner Community Blog) - Jurgen Kress shares info on several upcoming online events focused on WebLogic. (tags: weblogic oracle otn soa)
    - Business SOA: Data Services are bogus, Information services are real - Steve Jones says: "The other day when I was talking about MDM a bright spark pointed out that I hated data services but wasn't MDM just about data services?" (tags: SOA MDM)
    - Andrejus Baranovskis's Blog: Configuring Missing Contribution Folders for Oracle UCM 11g and WebCenter 11g PS3 - Andrejus says: "After doing some research on UCM, we found that the Folders_g component must be configured in UCM for Contribution Folders to be enabled." (tags: oracle otn oracleace UCM webcenter enterprise2.0)
    - Wim Coekaerts: Converting an Oracle VM VirtualBox VM into an Oracle VM Server image - Wim Coekaerts offers a few simple steps to convert an existing Oracle VM VirtualBox image. (tags: oracle otn virtualization virtualbox)
    - Stefan Hinker: Secure Deployment of Oracle VM Server for SPARC - This new paper from Stefan Hinker will help you understand the general security concerns in virtualized environments as well as the specific additional threats that arise out of them. (tags: oracle otn SPARC virtualization enterprisearchitecture)
    - The EA Roadmap to Rationalize, Standardize, and Consolidate the IT Portfolio - Enterprise IT is in a state of constant evolution. As a result, business processes and technologies become increasingly more difficult to change and more costly to keep up to date. (tags: entarch oracle otn)

    Read the article

  • Auto-run script when iPad plugged in

    - by oldmankit
    The way that Ubuntu handles documents on the iPad is awesome (without any configuration required). It beats Windows, even after you install iTunes. I want to have the documents in certain iPad apps automatically synced into my Dropbox directory whenever the iPad is connected by USB. The syncing is easy; getting the script to run is not. I have already read the information in various (very out-of-date) tutorials. The best I could find was here: http://askubuntu.com/a/25091/16157

    I used lsusb, with the following results:

        Bus 002 Device 012: ID 05ac:12a2 Apple, Inc.

    (Please note that when an iPad is connected, Ubuntu seems to mount it at two different mount points: one for "Documents" and one for the whole iPad filesystem. They are both mounted in ~/.gvfs.)

    I have created the following file, /etc/udev/rules.d/96-ipad_sync.rules:

        ACTION=="add", ATTRS{idVendor}=="05ac", ATTRS{idProduct}=="12a2", RUN+="/home/kit/bin/jobdone2"

    I want it to run a test script (which sleeps for five seconds, then plays an mp3 file; the test script works, and I have typed the location correctly). So far, when I plug the iPad in, nothing happens. Yes, I waited five seconds. This is the output I get from typing udevadm monitor --env:

        KERNEL[29348.114010] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4 (usb)
        KERNEL[29348.114844] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0 (usb)
        KERNEL[29348.129118] remove /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0 (usb)
        KERNEL[29348.130699] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:4.0 (usb)
        KERNEL[29348.130845] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:4.1 (usb)
        KERNEL[29348.130909] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:4.2 (usb)
        UDEV [29348.163861] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4 (usb)
        UDEV [29348.170390] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0 (usb)
        UDEV [29348.171521] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:4.1 (usb)
        UDEV [29348.172230] remove /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:1.0 (usb)
        UDEV [29348.172890] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:4.2 (usb)
        UDEV [29348.175645] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.4/2-1.4:4.0 (usb)

    Read the article

  • SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)

    - by pinaldave
    Here is the email which made me write this blog post. When I write a blog post, I write keeping in mind that if the developer is not familiar with the concept, he will attempt it on a development server. If for any reason you attempt it on any server other than your personal server, you should make sure to have complete confidence in your own expertise and understand the risk behind it. Well, let us read the email which I received. I have modified it a bit to remove information related to the organization and the individual.

    "I just read your blog post on Beginning DQS. I went ahead and followed every single screenshot and it worked fine. I was able to execute the DQS project successfully. However, the same blog post got me in trouble - serious trouble. After the first successful deployment, I went ahead and created a few of my own knowledge bases and projects. I played around a bit and then decided to get back to real work. Now, we had deployed DQS on the production server only, so I experimented on the production server. When I got back to my work, I forgot to close all the windows. My manager found the window open and saw my test projects. He asked me to delete my experiments immediately and said words which I cannot write to you. Here is the problem: I am not able to delete the project which I created earlier. I am able to open it and play with it, but the delete option is disabled and grayed out (see attached image). Now I believe there is nothing wrong with this project as it was just a test project. Would you please write to my manager that it is not harmful to leave that project there as it is? It is also not using any resources. I think he will believe you."

    As I said, this kind of email makes me uncomfortable. I do not want someone to execute anything on a production server. I often write notes and disclaimers on my posts when something is dangerous to execute on a production server. However, if someone who is not an expert with SQL Server attempts something new on a production server, I think the major issue lies with the person (admin) who gave the new developer permission to the production server. This has to be carefully avoided. Here was my response to the individual:

    "I cannot write anything to your manager, as he has not asked me anything. Honestly, I believe he is correct in his behavior, as you should not have executed anything on the production server without prior approval and testing on a development server. Any R&D must be done on a local box or a development box. I suggest you request your manager to prevent access for users who do not need it. If he is a good manager, he might have already implemented this by now after the recent event. I also saw your screenshot. Here is the issue: while you were playing with the project, you might have closed it halfway through, without completing it. For that reason it is locked. You can open it and continue from the same place where you left the project. If you do not need the project any more, right-click on it and click on Unlock the project. This will enable the DELETE option, and now you can delete the project. Next time, be safe out there. It can be dangerous to have admin access to a production server when it is not needed."

    I have not yet heard from him, but I believe he will take my words positively.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology. Tagged: Data Quality Services, DQS

    Read the article

  • Newsletter sent with drupal goes to Spam Folder [closed]

    - by HerrSerker
    Possible duplicate: How could I prevent my mail from being recognized as spam?

    I'm sending a newsletter with Drupal's Simplenews module. The website is hosted on a 1und1 server in Germany (as seen in the header domains online.de and kundenserver.de). When I send it, it goes to the Spam folder in Yahoo and GMail mailboxes, but not in web.de, Hotmail, and GMX mailboxes. Here is what I have in the mail header (for Yahoo in this example):

        Received: from 12.345.678.90 (EHLO sXXXXXXXXX.online.de) (12.345.678.90) by mtaXXX.mail.kks.yahoo.co.jp with SMTP; Fri, 15 Jun 2012 18:45:24 +0900
        Received: from [127.0.0.1] (helo=infongdXXXXX.rtr.kundenserver.de) by sXXXXXXXXX.online.de with esmtp (Exim 4.72) (envelope-from <[email protected]>) id 1SfT5k-00068r-Q8 for [email protected]; Fri, 15 Jun 2012 11:45:20 +0200
        Received: from 83.136.130.41 (IP may be forged by CGI script) by infongdXXXXX.rtr.kundenserver.de with HTTP id 0Z04SW-1SQTKp3LPr-00YxYk; Fri, 15 Jun 2012 11:45:20 +0200
        From: SENDER <[email protected]>
        To: "[email protected]" <[email protected]>
        Date: Fri, 15 Jun 2012 11:45:20 +0200
        Subject: This is the subject of the newsletter
        Thread-Topic: This is the subject of the newsletter
        Thread-Index: Ac1K3nT42juzo7uCSkq5dTlby1ZvpQ==
        List-Unsubscribe: <http://www.example.com/newsletter/confirm/remove/XXXXXXXXX>
        X-MS-Has-Attach:
        X-Auto-Response-Suppress: All
        X-MS-TNEF-Correlator:
        x-originating-ip: [12.345.678.90]
        authentication-results: mtaXXX.mail.kks.yahoo.co.jp from=example.com; domainkeys=neutral (no sig); dkim=neutral (no sig) [email protected]
        errors-to: "SENDER" <[email protected]>
        received-spf: none (sXXXXXXXXX.online.de: domain of [email protected] does not designate permitted sender hosts)
        x-apparently-to: [email protected] via 123.45.67.890; Fri, 15 Jun 2012 18:45:25 +0900
        x-sender-info: <[email protected]>
        content-length: 13762
        Content-Type: multipart/alternative; boundary="_000_7471797868716571796675707173696675806577726778666766687_"
        MIME-Version: 1.0

    I cannot see any direct spam-filter message in this, but I'm kind of stunned by the "Received: from 83.136.130.41 (IP may be forged by CGI script)" part. After searching a bit, it seems that this is a special 'feature' of 1und1 mail servers.

    Here are my questions:
    - Is it possible that, if I get rid of the 'IP may be forged' part, the mail will no longer be regarded as spam?
    - If so, does anyone know how I can get rid of it in Drupal?

    Read the article

  • How to leverage the internal HTTP endpoint available on Azure web roles?

    - by Alfredo Delsors
    Imagine you have a Web application using an in-memory collection that changes occasionally but is used very often. The collection gets loaded from storage on the Application_Start global.asax event and is updated whenever its content changes. If you want to deploy this application on Azure, you need to keep in mind that more than one instance of the application can be running at any time, and therefore you need to provide some mechanism to keep all instances informed of the latest changes. Because communication through internal endpoints between Azure role instances comes at no cost, a good solution can be maintaining the information in Azure Storage Tables, reading its contents on the Application_Start event, and propagating changes to all other instances using the internal HTTP port available on Azure Web Roles. You need to follow these steps to leverage the internal HTTP endpoint available on Azure web roles to keep all instances up to date.

    1. Define an internal HTTP endpoint in the Web Role properties, for example InternalHttpEndpoint.

    2. Add a new WCF service to the Web Role, for example NotificationService.svc.

    3. Disable multiple site bindings in web.config:

        <serviceHostingEnvironment multipleSiteBindingsEnabled="false">

    4. Add a method on the new service to receive notifications from other role instances:

        namespace Service
        {
            [ServiceContract]
            public interface INotificationService
            {
                [OperationContract(IsOneWay = true)]
                void Notify(Information info);
            }
        }

    5. Declare a class that inherits from System.ServiceModel.Activation.ServiceHostFactory and override the method CreateServiceHost to host the internal endpoint:

        public class InternalServiceFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                var internalEndpointAddress = string.Format(
                    "http://{0}/NotificationService.svc",
                    RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["InternalHttpEndpoint"].IPEndpoint);

                ServiceHost host = new ServiceHost(
                    typeof(NotificationService),
                    new Uri(internalEndpointAddress));

                BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None);
                host.AddServiceEndpoint(
                    typeof(INotificationService),
                    binding,
                    internalEndpointAddress);

                return host;
            }
        }

    Note that you can use SecurityMode.None because the internal endpoint is private to the instances of the service.

    6. Edit the markup of the service (right-click the svc file and select "View markup") to set the new factory as the factory used to create the service:

        <%@ ServiceHost Language="C#" Debug="true" Factory="Service.InternalServiceFactory" Service="Service.NotificationService" CodeBehind="NotificationService.svc.cs" %>

    7. Now you can notify changes to other instances using this code:

        var current = RoleEnvironment.CurrentRoleInstance;
        var endPoints = current.Role.Instances
            .Where(instance => instance != current)
            .Select(instance => instance.InstanceEndpoints["InternalHttpEndpoint"]);

        foreach (var ep in endPoints)
        {
            EndpointAddress address = new EndpointAddress(
                String.Format("http://{0}/NotificationService.svc", ep.IPEndpoint));
            BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None);
            var factory = new ChannelFactory<INotificationService>(binding);
            INotificationService instance = factory.CreateChannel(address);
            instance.Notify(changedinfo);
        }

    Read the article

  • Business Insight, IT Execution: 9 Project Management Tips

    - by Sylvie MacKenzie, PMP
    Excerpt from Profit Magazine, by David Rosenbaum.

    When Marcos Baccetto was first asked to be the business-side project lead on Eaton Corporation's Vehicle Group South America (VGSA) Oracle project, the operations services manager responsible for running manufacturing was, he confesses, "a little afraid" because of his lack of IT experience. Today, Baccetto calls the project "a fantastic experience," and he is a true believer in the benefits of a close relationship between IT implementers and their line-of-business peers. Through his partnership with Jesiele Lima, then VGSA IT manager, Baccetto and Eaton's South American operations team came to understand several important principles of business and IT. Here he shares nine tips managers should consider when working on an enterprise technology project.

    1. Make it a business project, not an IT project. All levels of functional management must have ownership, responsibility, and accountability for the success of the implementation.
    2. Share responsibility. Business owners should sign off on tests and data conversion.
    3. Clean your data. Dedicating a team to improve core data quality prior to project launch can be a significant time-saver.
    4. Select resources properly. Have functional people who can translate business needs to IT and can influence organizational change.
    5. Manage scope. Follow project management methodologies and disciplines.
    6. Adopt common processes, global solutions. Avoid customized, local solutions. The big-picture business goals can get lost in the details.
    7. Implement processes prior to the go-live date. Change management can be key. Keep the workforce informed and train users in advance.
    8. Define metrics milestones. Assume there will be a crisis during deployment. Having baseline metrics to compare against will help implementers keep their cool - and the project moving forward.
    9. The sponsor's commitment is critical. It is needed to support the truly difficult decisions.

    Read the article

  • External hard drive not recognized

    - by sr71
    Installed Ubuntu 10.10 on a hard drive by itself (no Windows). All updates are up to date. Everything works fine except when I plug in a 320 gig Toshiba external hard drive: it is not recognized. When I plug in an 8 gig flash drive, it is recognized with no problem.

    What do I mean by "not recognized"? I mean: how do you know it was not recognized? You can check with the command "cat /proc/partitions" in a terminal, with and without the HDD attached. If you see some difference, it's OK. If the difference is something like "sda1" (sd + letter + number), then you have a partition on it; maybe it just can't be handled in Ubuntu (no filesystem on it, or so). If the difference is only something like "sda" (sd + letter, but no number after it), then the drive itself is detected, but no partition was found on it. You can also check the messages of the kernel, with the "dmesg" command in the terminal. If there is no disk/partition/anything in /proc/partitions, there can be a USB-level problem; you can issue the command "lsusb", as was suggested before my answer. It's really a matter of what you mean by "not recognized".

    @Marco Ceppi @Oli @Pitto: By "not recognized" I meant that when I plug in an 8 gig flash drive, an icon immediately appears on the desktop indicating that there is a flash drive plugged in, and I can click on it and view the files on the flash drive. It also shows up in the "Nautilus" default file manager. When I plug in the 320 gig Toshiba external hard drive, I get no indication on the desktop or in the file manager (thus "not recognized"). When I run the command cat/proc/partitions/ I get the error message "Could not open location 'file:///home/bob/cat/proc/partitions'", with or without the external hard drive attached. With the dmesg command I get about 10 pages of info with no mention of disk/partition/anything in /proc/partitions. When I run the lsusb command I get the following:

        bob@bob-desktop:~$ lsusb
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 003: ID 04b3:3003 IBM Corp. Rapid Access III Keyboard
        Bus 003 Device 002: ID 04b3:3004 IBM Corp. Media Access Pro Keyboard
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Read the article

  • Upgrading 10.04LTS -> 10.10 using custom sources

    - by Boatzart
    I'm trying to upgrade to 10.10 from 10.04 LTS using a custom sources.list file that points to an unofficial mirror*. The mirror does have maverick, but I get the following output when upgrading:

        boatzart@somecomputer: > sudo do-release-upgrade
        Checking for a new ubuntu release
        Done Upgrade tool signature
        Done Upgrade tool
        Done downloading
        extracting 'maverick.tar.gz'
        authenticate 'maverick.tar.gz' against 'maverick.tar.gz.gpg'
        tar: Removing leading `/' from member names
        Reading cache
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Updating repository information
        WARNING: Failed to read mirror file
        No valid mirror found
        While scanning your repository information no mirror entry for the upgrade was found. This can happen if you run a internal mirror or if the mirror information is out of date.
        Do you want to rewrite your 'sources.list' file anyway? If you choose 'Yes' here it will update all 'lucid' to 'maverick' entries. If you select 'No' the upgrade will cancel.
        Continue [yN] y
        WARNING: Failed to read mirror file
        96% [Working]
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade:
        The package 'update-manager-kde' is marked for removal but it is in the removal blacklist.
        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug against the 'update-manager' package and include the files in /var/log/dist-upgrade/ in the bug report.
        Restoring original system state
        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done

    Here is the relevant section from /var/log/dist-upgrade/main.log:

        2010-11-18 14:05:52,117 DEBUG The package 'update-manager-kde' is marked for removal but it's in the removal blacklist
        2010-11-18 14:05:52,136 ERROR Dist-upgrade failed: 'The package 'update-manager-kde' is marked for removal but it is in the removal blacklist.'
        2010-11-18 14:05:52,136 DEBUG abort called

    * I'm located inside of USC, and for some crazy reason any sustained downloads to anywhere outside of the university are throttled down to 5kbps inside of my lab. Because of this I need to use the following sources.list:

        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-security main restricted universe multiverse

    I've tried adding four more entries to the sources.list with s/lucid/maverick/, but that didn't help. Does anyone know how to fix this? Thanks!

    Read the article

  • Using HBase or Cassandra for a token server

    - by crippy
    I've been trying to figure out how to use HBase/Cassandra for a token system we're re-implementing. I can probably squeeze quite a lot more from MySQL, but it just seems we've come to clinging on to the wrong tool for the task simply because we know it well. Eventually we will hit a wall (as happened to us in other areas). Naturally, I started looking into possible NoSQL solutions. The prominent ones (at least in terms of buzz) are HBase and Cassandra.

    The story is more or less like this:
    - A user can send a gift to other users.
    - Each gift has a list of recipients, or is public, in which case it is limited by number or expiration date.
    - For each gift sent we generate a token that uniquely identifies that gift.
    - For each gift we track the list of potential recipients and their current status relating to that gift (accepted, declined, etc.).
    - A user can request to see all his currently pending gifts.
    - A user can request a list of users he has sent a gift to today (used to limit the number of gifts sent).
    - We require the ability to "dump" or "ignore" expired gifts (gifts x days old are considered expired).

    There are some other requirements, but I believe the above covers the essentials. How would I go about modelling that using HBase or Cassandra?

    Well, the wall was performance. A few tens of millions of records per day over 2 tables, kept for 2 weeks (I wish I could have kept the data longer, but there was no way). The response times kept getting slower and slower until eventually we had to start cutting down the number of days we kept data. Caching helps here, but it's not an ideal solution since a big part of the ops are updates. Also, as I hinted in my original post, we use MySQL extensively. We know exactly what it can and can't do, both in naive implementations, followed by native partitioning, and finally by horizontally sharding our dataset at the application level to reside on multiple DB nodes. It can be done, but that's not really what I'm trying to get from this. I asked a very specific question about designing a solution using a NoSQL solution, since it's very hard to find examples of such designs out there. Brainlag, I'm not trying to come off as rude. I actually appreciate it a lot that you are the only one who even bothered to respond. But I see it over and over again: people ask questions and others assume they have no idea what they're talking about and give an irrelevant answer. Please ignore the RDBMS angle; the question is about NoSQL.

    Read the article

  • Why should I not do a masters degree

    - by aurel
    I left university in July 2010, where I studied web design (as we all know, you learn more by yourself, but that's not the issue at the moment). Since then I have not managed to find a job (apart from one month of work experience). From the way things are going, and taking into account the fact that all my university friends are in the same situation, I don't think I am going to find a job within the industry any time soon.

    Now, even though I don't have a job, I am still working on personal projects and trying to keep up to date (I don't need a job or uni to do this) - but I am thinking: because there is no work available, would it be worth going back to uni for a masters degree? I know I don't need it, and I know it is unlikely that I will learn anything important, as I believe in self-learning, and in most cases it is a lot more effective (but I have to say I don't mind going back to school).

    The only reason I am thinking of doing the masters is this (and this is where I need your help): if it takes me a year to get a job, then at the interview, would the employer think "what the hell did this guy do since he left university"? If I go to university, that would solve this problem. Or am I making up a problem that does not exist? Plus, I know that employers want examples of sites that I have been working on, and at the moment I only have 3 (when working on personal projects, where there is no time limit, I tend to drag things out in order to get them perfect, and they never get perfect) - so by going back to uni this problem may also be solved.

    I said all this because I have read a lot about the fact that you don't need a masters degree to work in the web design market (and I totally agree), but considering my concern, the question is: should I do a masters course to avoid just spending hours in my room working and learning on my own (given that it would be hard to convince employers that I was really learning in my room)?

    Maybe it's because I'm still young - age 22, not that old anyway :) - but I don't have the "dream" of being rich, so to tell the truth I don't really care (at the moment) that I don't have a job, because regardless, I am working on what I love every day. But I know that in the future, when I do need a job, I may find it harder to get one if I neglect doing something about it now.

    Every time I ask a question that I'm not sure about, I keep going on and on, but I really hope you get what I am trying to get across. By the way, the masters course I am looking at says that it would teach me:
    - e-commerce
    - e-government
    - e-science
    - e-learning

    I don't know any of them apart from e-commerce. Thanks

    Read the article

  • Scenarios for Throwing Exceptions

    - by Joe Mayo
    I recently came across a situation where someone had an opinion that differed from mine on when an exception should be thrown. The particular case was an issue opened on LINQ to Twitter: Exception on EndSession. The premise of the issue was that the poster didn't feel an exception should be raised, regardless of authentication status. At first, this sounded like a valid point. However, I went back to review my code and decided not to make any changes. Here's my rationale:

    1. The exception doesn't occur if the user is authenticated when EndAccountSession is called.
    2. The exception does occur if the user is not authenticated when EndAccountSession is called.
    3. The exception represents the fact that EndAccountSession is not able to fulfill its intended purpose - to end the session. If a session never existed, then it would not be possible to perform the requested action. Therefore, an exception is appropriate.

    To help illustrate how to handle this situation, I've modified the following code in Program.cs in the LinqToTwitterDemo project:

        static void EndSession(ITwitterAuthorizer auth)
        {
            using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))
            {
                try
                {
                    //Log
                    twitterCtx.Log = Console.Out;

                    var status = twitterCtx.EndAccountSession();

                    Console.WriteLine("Request: {0}, Error: {1}"
                        , status.Request
                        , status.Error);
                }
                catch (TwitterQueryException tqe)
                {
                    var webEx = tqe.InnerException as WebException;
                    if (webEx != null)
                    {
                        var webResp = webEx.Response as HttpWebResponse;
                        if (webResp != null && webResp.StatusCode == HttpStatusCode.Unauthorized)
                            Console.WriteLine("Twitter didn't recognize you as having been logged in. Therefore, your request to end session is illogical.\n");
                    }

                    var status = tqe.Response;
                    Console.WriteLine("Request: {0}, Error: {1}"
                        , status.Request
                        , status.Error);
                }
            }
        }

    As expected, LINQ to Twitter wraps the exception in a TwitterQueryException as the InnerException. The TwitterQueryException serves a very useful purpose through its Response property. Notice in the example above that the response has Request and Error properties. These properties correspond to the information that Twitter returns as part of its response payload. This is often useful while debugging, to help you understand why Twitter was unable to perform the requested action. Other times it's cryptic, but that's another story. At least you have some way of knowing in your code how to anticipate and handle these situations, along with having extra information to debug with.

    To sum things up, there are two points to make: when and why an exception should be raised, and when to wrap and re-throw an exception in a custom exception type. I felt it was necessary to allow the exception to be raised because the called method was unable to perform the task it was designed for. I also felt that it is inappropriate for a general library to do anything with exceptions, because that could potentially hide a problem from the caller. A related point is that it should be the exclusive decision of the application that uses the library what to do with an exception. Another aspect of this situation is that I wrapped the exception in a custom exception and re-threw. This is a tough call because I don't want to hide any stack trace information.
However, the need to make the exception more meaningful by including vital information returned from Twitter swayed me in the direction to design an interface that was as helpful as possible to library consumers.  As shown in the code above, you can dig into the exception and pull out a lot of good information, such as the fact that the underlying HTTP response was a 401 Unauthorized.  In all, trade-offs are seldom perfect for all cases, but combining the fact that the method was unable to perform its intended function, this is a library, and the extra information can be more helpful, it seemed to be the better design. @JoeMayo
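    For the throwing side of that trade-off, here is a minimal sketch of the wrap-and-rethrow pattern, assuming a simplified custom exception type; the names below are illustrative and are not the actual LINQ to Twitter source:

        using System;
        using System.Net;

        // Illustrative only: a simplified wrapping exception. The real
        // TwitterQueryException carries more state than shown here.
        public class CustomQueryException : Exception
        {
            public string ResponseBody { get; private set; }

            public CustomQueryException(string message, string responseBody, Exception inner)
                : base(message, inner) // passing the inner exception preserves the stack trace
            {
                ResponseBody = responseBody;
            }
        }

        public static class SessionHelper
        {
            // Wrap rather than swallow at the point of failure, so the caller keeps
            // both the original WebException and the service's response payload.
            public static void ThrowWrapped(WebException webEx, string rawResponse)
            {
                throw new CustomQueryException(
                    "Unable to end the account session.", rawResponse, webEx);
            }
        }

    The key design point is passing the original exception as InnerException: nothing from the stack trace is lost, while the extra response information becomes available to the caller.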

    Read the article

  • Modelling highly specific business requirements

    - by AndyBursh
    How can one go about modelling highly specific business requirements which have no precedent in the system? Take for example the following requirement: when a purchase order contains N lines, is over X value in total, and is being recorded against project Y, an email needs to be sent to persons A and B with the details.

    This requirement supplements other requirements surrounding purchase orders, but comes in at a much later date, in response to some ongoing problem elsewhere in the business. Persons A and B are not part of any role or group in the system, and don't hold any specific responsibility; they are simply the two people the business has appointed to receive these emails in this very specific case. Projects are also data driven, so project Y has no special properties to distinguish it from any other project; the only way to identify it is to compare its identifier to a magic number.

    How can one go about modelling this kind of case without introducing too much additional complexity? Off the top of my head, there are a couple of options:

    1. Perform the checks and actions inline with the existing code. Here we find the correct spot in the code, check the conditions in the requirement, and send the emails to hardcoded addresses. Of course this is fraught with issues: at the very least, it stops working if one of these people leaves or changes their email address; at worst, you have to ensure that any tests and test data are aware that additional actions are taken for a specific set of criteria.

    2. Introduce some form of events system. Here we introduce an eventing system so that we can react to an event and fulfil the requirement outside the usual path of execution (see the sketch below). This sounds like a cleaner solution than option 1, but the work involved is probably overkill for this one small requirement. That said, having it in place would allow the system to handle these kinds of specific requirements consistently and easily in the future.

    Are there any other (good/better) ways of handling highly specific requirements? I mean, other than telling the other parts of the business no!
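    As a rough illustration of option 2, here is a minimal sketch of an event handler that keeps the one-off rule in a single place, with the magic values pushed into configuration; every name here is illustrative and not taken from any real system:

        using System.Collections.Generic;

        // Raised by the purchase order code whenever an order is recorded.
        public class PurchaseOrderRecorded
        {
            public int LineCount { get; set; }
            public decimal Total { get; set; }
            public int ProjectId { get; set; }
        }

        // Loaded from configuration, so the rule can change without a redeploy
        // when persons A and B inevitably move on.
        public class NotificationRule
        {
            public int MinLines { get; set; }
            public decimal MinTotal { get; set; }
            public int ProjectId { get; set; }
            public List<string> Recipients { get; set; }
        }

        public interface IEmailSender
        {
            void Send(IEnumerable<string> to, string subject, string body);
        }

        // Subscribed to the event; the special case lives here and nowhere else.
        public class SpecialProjectNotificationHandler
        {
            private readonly NotificationRule rule;
            private readonly IEmailSender email;

            public SpecialProjectNotificationHandler(NotificationRule rule, IEmailSender email)
            {
                this.rule = rule;
                this.email = email;
            }

            public void Handle(PurchaseOrderRecorded e)
            {
                if (e.LineCount >= rule.MinLines &&
                    e.Total > rule.MinTotal &&
                    e.ProjectId == rule.ProjectId)
                {
                    email.Send(rule.Recipients, "Purchase order alert",
                        string.Format("PO with {0} lines totalling {1} on project {2}.",
                            e.LineCount, e.Total, e.ProjectId));
                }
            }
        }

    The main win is that the core purchase order code only ever raises the event; nothing in the normal path of execution knows that persons A and B exist.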

    Read the article

  • JDK bug migration milestone: JIRA now the system of record

    - by darcy
    I'm pleased to announce that the OpenJDK bug database migration project has reached a significant milestone: the JDK has switched from the legacy Sun "bugtraq" system to a new internal JIRA instance as the system of record for our bug tracking. This completes the initial phase of the previously described plan of getting OpenJDK onto an externally visible and writable bug tracker. The identities contained in the current system include recognized OpenJDK contributors.

    The bug migration effort to date has been sizable in multiple dimensions. There are around 140,000 distinct issues imported into the JDK project of the JIRA instance, nearly 165,000 if the backport issues used to track multiple-release information are included. Separately, the Code Tools OpenJDK project has its own JIRA project, populated with several thousand existing bugs. Once the OpenJDK JIRA instance is externalized, approved OpenJDK projects will be able to request the creation of a JIRA project for issue tracking.

    There are many differences between the schema used to model bugs in the legacy system and the schema for the new JIRA projects. We've favored simplifications to the existing system where possible and, after much discussion, we've settled on five main states for the OpenJDK JIRA projects:

    - New
    - Open
    - In progress
    - Resolved
    - Closed

    The Open and In-progress states can have a substate Understanding field set to track whether the issue has its "Cause Known" or "Fix Understood". In the Closed state, a Verification field can indicate whether a fix has been verified, is unverified, or has failed. (A sketch of this workflow as a data structure follows below.)

    At the moment, there will be very little externally visible difference between JIRA for OpenJDK and the legacy system it replaces. One difference is that bug numbers for newly filed issues in the JIRA JDK project will be 8000000 and above. If you are working with JDK Hg repositories, update any local copies of jcheck to the latest version, which recognizes this expanded bug range. (The bug numbers of existing issues have been preserved in the import into JIRA.)

    Relatively soon, we plan for the pages published on bugs.sun.com to be generated from information in JIRA rather than in the legacy system. When this occurs, there will be some differences in the page display, and the terminology used will be revised to reflect JIRA usage, such as referring to the "component/subcomponent" of an issue rather than its "category". The exact timing of this transition will be announced when it is known.

    We don't currently have a firm timeline for externalization of the JIRA system. Updates will be provided as they become available. However, that is unlikely to happen before JavaOne next week!
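    To make the workflow schema concrete, here is a minimal sketch of the five states and the two substate fields as C# enums; the type and member names are illustrative only, not taken from the actual JIRA configuration:

        // Illustrative only: the five main issue states described above.
        public enum IssueState { New, Open, InProgress, Resolved, Closed }

        // Substate Understanding field, applicable while Open or In progress.
        public enum Understanding { Unset, CauseKnown, FixUnderstood }

        // Verification field, applicable once Closed.
        public enum Verification { Unset, Verified, Unverified, FixFailed }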

    Read the article

  • MatheMagics - Guess My Age Method 1

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved.

    MatheMagic - Guess My Age - Method 1

    The Mathemagician stands on the stage and asks an adult to do the following:

    - Do the next few steps on your calculator, or the calculator in your phone, or even on a piece of paper.
    - Do it silently! Don't tell me the results until I ask for them directly.
    - Compute a single-digit multiple of 9 - any one of 9, 18, 27, ... all the way to 81 will do.
    - Now multiply your age by 10.
    - Subtract the multiple of 9 from this number.
    - Tell me the result.

    Notice that I don't know which multiple of 9 you subtracted from 10 times your age. I will nonetheless immediately tell you what your age is. How do I do this? Let's do the algebra: 10X - 9Y = 10X - 10Y + Y = 10(X - Y) + Y. Now remember, you asked an adult, so his/her age is a two-digit number (maybe even three digits); thus reducing ten times the age by a single digit multiplied by nine still leaves a positive result - the lowest it can be is 100 - 81, which yields 19.

    Now make two numbers out of the result: the last digit, and the number before it. That number is X - Y, the age minus the single digit you selected, and the last digit is that very single digit. This is always so, regardless of the digit you selected. So... add this digit to the other number and you get back the age! Q.E.D.

    Example: I am 76 years old, and here is what happens when I do the steps:
    76 x 10 = 760
    760 - 18 = 742, made of 74 and 2. My age is 74 + 2.
    760 - 81 = 679, made of 67 and 9. My age is 67 + 9.

    A note to the socially aware mathemagician: it is safer to do this with a man. The chances of a veracious answer are much, much higher! The trick may be performed on any two- or three-digit number, not just one's age, but if you want to know your date's age, it's a good way to elicit it.

    That's All Folks

    PS: For more ageless "Age" mathemagics go to www.mgsltns.com/games.htm and also here: http://geekswithblogs.net/PointsToShare/archive/2011/11/15/mathemagics---guess-my-age---method-2.aspx
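    Since the recovery step is pure arithmetic, here is a minimal sketch of it in C# (the names are mine, for illustration): because 10X - 9Y = 10(X - Y) + Y, the reported number's last digit is Y and the rest is X - Y, so adding the two parts returns the age X.

        using System;

        class GuessMyAge
        {
            // Recover the age from the spectator's reported result.
            static int RecoverAge(int reported)
            {
                int lastDigit = reported % 10; // this is Y, the chosen 9-multiple's digit
                int rest = reported / 10;      // this is X - Y, the age minus that digit
                return rest + lastDigit;       // (X - Y) + Y = X
            }

            static void Main()
            {
                Console.WriteLine(RecoverAge(742)); // 76 x 10 - 18 = 742 -> 74 + 2 = 76
                Console.WriteLine(RecoverAge(679)); // 76 x 10 - 81 = 679 -> 67 + 9 = 76
            }
        }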

    Read the article

  • Ubuntu 10.10 not recognizing external hard drive

    - by sr71
    Installed Ubuntu 10.10 on a hard drive by itself (no Windows). All updates are up to date. Everything works fine except when I plug in a 320 GB Toshiba external hard drive: it is not recognized. When I plug in an 8 GB flash drive, it is recognized, no problem.

    What do I mean by "not recognized"? I mean: how do you know it was not recognized? You can check the command "cat /proc/partitions" in a terminal with and without the HDD attached. If you see some difference, it's OK. If the difference is something like "sda1" (sd + letter + number), then you have a partition on it; maybe it just can't be handled by Ubuntu (no filesystem on it, or so). If the difference is only something like "sda" (sd + letter, but no number after it), then the drive itself is detected, but no partition was found on it. You can also check the messages of the kernel with the "dmesg" command in the terminal. If there is no disk/partition/anything in /proc/partitions, there may be a USB-level problem; you can issue the command "lsusb", as was suggested before my answer. It's really a matter of what you mean by "not recognized".

    @Marco Ceppi @Oli @Pitto By "not recognized" I meant that when I plug in an 8 GB flash drive, an icon immediately appears on the desktop indicating that there is a flash drive plugged in, and I can click on it and view the files on the flash drive. It also shows up in Nautilus, the default file manager. When I plug in the 320 GB Toshiba external hard drive, I get no indication on the desktop or in the file manager (thus "not recognized"). When I run the command "cat/proc/partitions/" I get the error message "Could not open location 'file:///home/bob/cat/proc/partitions'" with or without the external hard drive attached. With the "dmesg" command I get about 10 pages of info, with no mention of disk/partition/anything in /proc/partitions. When I run the "lsusb" command I get the following:

    bob@bob-desktop:~$ lsusb
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 003: ID 04b3:3003 IBM Corp. Rapid Access III Keyboard
    Bus 003 Device 002: ID 04b3:3004 IBM Corp. Media Access Pro Keyboard
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    bob@bob-desktop:~$

    Read the article
