Search Results

Search found 7914 results on 317 pages for 'valid xhtml'.


  • Combining template method with strategy

    - by Mekswoll
    An assignment in my software engineering class is to design an application which can play different forms of a particular game. The game in question is Mancala; some of its variants are called Wari or Kalah. These games differ in some aspects, but for my question it's only important to know that they can differ in the following:

    - The way in which the result of a move is handled
    - The way in which the end of the game is determined
    - The way in which the winner is determined

    The first thing that came to my mind was to use the strategy pattern, since I have a variation in algorithms (the actual rules of the game). I then thought to myself that in the games of Mancala and Wari the way the winner is determined is exactly the same, and the code would be duplicated. I don't think this is by definition a violation of the 'one rule, one place' or DRY principle, seeing as a change in rules for Mancala wouldn't automatically mean that rule should be changed in Wari as well. Nevertheless, from the feedback I got from my professor I got the impression that I should find a different design.

    I then came up with this: each game (Mancala, Wari, Kalah, ...) would just have an attribute of the type of each rule's interface, i.e. WinnerDeterminer, and if there's a Mancala 2.0 version which is the same as Mancala 1.0 except for how the winner is determined, it can just reuse the Mancala versions. I think the implementation of these rules as a strategy pattern is certainly valid.

    But the real problem comes when I want to design it further. In reading about the template method pattern I immediately thought it could be applied to this problem. The actions that are done when a user makes a move are always the same, and in the same order, namely:

    - deposit stones in holes (this is the same for all games, so it would be implemented in the template method itself)
    - determine the result of the move
    - determine if the game has finished because of the previous move
    - if the game has finished, determine who has won

    The last three steps are all in my strategy pattern described above. I'm having a lot of trouble combining these two. One possible solution I found would be to abandon the strategy pattern, but I don't really see the design difference between the strategy pattern and that. And I am certain I need to use a template method (although I was just as sure about having to use a strategy pattern). I also can't determine who would be responsible for creating the TurnTemplate object, whereas with the strategy pattern I feel I have families of objects (the three rules) which I could easily create using an abstract factory pattern. I would then have a MancalaRuleFactory, WariRuleFactory, etc., and they would create the correct instances of the rules and hand me back a RuleSet object.

    Let's say that I use the strategy + abstract factory pattern and I have a RuleSet object which has algorithms for the three rules in it. The only way I feel I can still use the template method pattern with this is to pass this RuleSet object to my TurnTemplate. The 'problem' that then surfaces is that I would never need concrete implementations of the TurnTemplate; these classes would become obsolete. In my protected methods in the TurnTemplate I could just call ruleSet.determineWinner(). As a consequence, the TurnTemplate class would no longer be abstract but would have to become concrete; is it then still a template method pattern?

    To summarize, am I thinking in the right way or am I missing something easy? If I'm on the right track, how do I combine a strategy pattern and a template method pattern? This is part of a homework assignment, but I'm not looking to be gifted the answer; I have deliberately been very verbose in my question to show that I have thought about it before coming here to ask.
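
    For reference, here is a minimal sketch in C++ of the combination being described. The names are illustrative, not prescribed by the assignment: RuleSet and MancalaRuleFactory come from the question itself, while the three rule interfaces and the Turn class are invented for the example. The idea is that the template method fixes the order of the steps in a turn, while the varying steps delegate to strategies grouped in a RuleSet built by an abstract factory.

    // Illustrative sketch only: RuleSet and MancalaRuleFactory are named in the
    // question above; the rule interfaces and Turn are invented for this example.
    #include <iostream>
    #include <memory>
    #include <utility>

    // Strategies: one interface per rule that varies between game variants.
    struct MoveResultDeterminer {
        virtual ~MoveResultDeterminer() = default;
        virtual void determineResult() = 0;
    };
    struct GameEndDeterminer {
        virtual ~GameEndDeterminer() = default;
        virtual bool isGameOver() = 0;
    };
    struct WinnerDeterminer {
        virtual ~WinnerDeterminer() = default;
        virtual void determineWinner() = 0;
    };

    // A RuleSet groups the three strategies for one game variant.
    struct RuleSet {
        std::unique_ptr<MoveResultDeterminer> moveResult;
        std::unique_ptr<GameEndDeterminer> gameEnd;
        std::unique_ptr<WinnerDeterminer> winner;
    };

    // Abstract factory: one concrete factory per game variant.
    struct RuleFactory {
        virtual ~RuleFactory() = default;
        virtual RuleSet createRuleSet() = 0;
    };

    // The 'template method' class: the step order is fixed in play(), but the
    // varying steps are delegated to the injected strategies, so the class
    // itself can stay concrete.
    class Turn {
    public:
        explicit Turn(RuleSet rules) : rules_(std::move(rules)) {}
        void play() {
            depositStones();                      // same for every game
            rules_.moveResult->determineResult(); // varies per game
            if (rules_.gameEnd->isGameOver())     // varies per game
                rules_.winner->determineWinner(); // varies per game
        }
    private:
        void depositStones() { std::cout << "depositing stones\n"; }
        RuleSet rules_;
    };

    // Concrete Mancala rules (bodies stubbed for brevity).
    struct MancalaMoveResult : MoveResultDeterminer {
        void determineResult() override { std::cout << "Mancala move result\n"; }
    };
    struct MancalaGameEnd : GameEndDeterminer {
        bool isGameOver() override { return true; }
    };
    struct MancalaWinner : WinnerDeterminer {
        void determineWinner() override { std::cout << "Mancala winner\n"; }
    };

    struct MancalaRuleFactory : RuleFactory {
        RuleSet createRuleSet() override {
            return { std::make_unique<MancalaMoveResult>(),
                     std::make_unique<MancalaGameEnd>(),
                     std::make_unique<MancalaWinner>() };
        }
    };

    int main() {
        MancalaRuleFactory factory; // a WariRuleFactory could reuse MancalaWinner
        Turn turn(factory.createRuleSet());
        turn.play();
    }

    Whether the result is still "really" a template method once the hooks are delegated rather than overridden is largely a naming question; the intent of the pattern, a fixed skeleton with pluggable steps, survives either way, which is one defensible answer to the question above.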

    Read the article

  • The enterprise vendor con - connecting SSDs using SATA 2 (3Gbit/s), thus limiting their performance

    - by tonyrogerson
    When comparing SSD against hard drive performance, it really makes me cross when folk think comparing an array of SSDs running on 3Gbit/s to hard drives running on 6Gbit/s is somehow valid. In a paper from DELL (http://www.dell.com/downloads/global/products/pvaul/en/PowerEdge-PowerVaultH800-CacheCade-final.pdf) on increasing database performance using the DELL PERC H800 with solid state drives, they compare four SSDs connected at 3Gbit/s against ten 10Krpm drives connected at 6Gbit/s [Tony slaps forehead while shouting DOH!].

    It is true that in the case of hard drives it probably doesn’t make much difference whether the link is 3Gbit or 6Gbit, because SAS and SATA are both end-to-end protocols rather than a shared bus architecture like SCSI, so the hard drive doesn’t share bandwidth and probably can’t get near the 600MiB/s throughput that 6Gbit gives unless you are doing contiguous reads. In my own tests on a single 15Krpm SAS disk using IOMeter (8 worker threads, queue depth of 16, a stripe size of 64KiB, and an 8KiB transfer size on a drive formatted with an allocation size of 8KiB, for a 100% sequential read test) I only get 347MiB/s sustained throughput at an average latency of 2.87ms per IO, equating to 44.5K IOps. OK, if that was 3Gbit it would be less – around 280MiB/s. Oh, but wait a minute [...fingers tap desk]...

    You’ll struggle to find in the commodity space an SSD that doesn’t have the SATA 3 (6Gbit) interface. SSDs are fast: not only low latency and high IOps, they also offer a very large sustained transfer rate. Consider the OCZ Agility 3. It so happens that in my masters dissertation I did the same test but on a different box, and I got 374MiB/s at an average latency of 2.67ms per IO, equating to 47.9K IOps. The cost of a 240GB Agility 3 is £174.24 (http://www.scan.co.uk/products/240gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-500mb-s-85k-iops), but that same drive in a box connected with SATA 2 (3Gbit) would only yield around 280MiB/s, thus losing almost 100MiB/s of throughput and a ton of IOps too.

    So why the hell are “enterprise” vendors still only connecting SSDs at 3Gbit? Well, my conspiracy theory states that they have no interest in you moving to SSD because they’ll lose so much money. The argument that they use SATA 2 doesn’t wash; SATA 3 has been out for some time now and all the commodity stuff you buy uses it.

    Consider the cost, not in terms of price per GB but price per IOps: SSDs absolutely thrash hard drives on that. It used to be true that the opposite also held, that hard drives thrashed SSDs on price per GB, but is that still true? I’m not so sure. A 300GB 2.5” 15Krpm SAS drive costs £329.76 ex VAT (http://www.scan.co.uk/products/300gb-seagate-st9300653ss-savvio-15k3-25-hdd-sas-6gb-s-15000rpm-64mb-cache-27ms), which equates to £1.09 per GB, compared to a 480GB OCZ Agility 3 costing £422.10 ex VAT (http://www.scan.co.uk/products/480gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-410mb-s-30k-iops), which equates to £0.88 per GB. OK, I compared an “enterprise” hard drive with a “commodity” SSD, and things get a little more complicated here: most “enterprise” SSDs are SLC and most commodity ones are MLC, and SLC gives more performance and wear life. I’ll talk about that another day.

    For now though, don’t get sucked in by vendor marketing: SATA 2 (3Gbit) just doesn’t cut it. SSDs need 6Gbit to breathe, and even that they are pushing. Alas, SSDs are connected using SATA, and all the controllers I’ve seen thus far from HP and DELL only do SATA 2 – deliberate? Well, I’ll let you decide on that one.
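
    As a back-of-envelope check on the throughput claims (my own addition, not from the original post): SATA links use 8b/10b encoding, so the usable payload ceiling is roughly the line rate divided by 10, in bytes per second. A small C++ sketch of the arithmetic, reusing the ex-VAT prices quoted above:

    // Back-of-envelope only; prices are the ex-VAT figures quoted in the post above.
    #include <cstdio>

    int main() {
        // Payload ceiling in MB/s for a SATA link, given the line rate in Gbit/s.
        // 8b/10b encoding means 10 bits on the wire per byte of payload.
        auto sataCeilingMB = [](double gbitPerSec) { return gbitPerSec * 1000.0 / 10.0; };
        std::printf("SATA 2 ceiling: %.0f MB/s\n", sataCeilingMB(3.0)); // 300
        std::printf("SATA 3 ceiling: %.0f MB/s\n", sataCeilingMB(6.0)); // 600

        // Price per GB for the two drives quoted in the post.
        std::printf("15Krpm SAS 300GB:    GBP %.2f per GB\n", 329.76 / 300.0); // ~1.10 (the post rounds to 1.09)
        std::printf("OCZ Agility 3 480GB: GBP %.2f per GB\n", 422.10 / 480.0); // 0.88
    }

    The ~300MB/s ceiling lines up with the ~280MiB/s the post estimates for the same drive on a 3Gbit link, once command and protocol overhead are taken off the top.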

    Read the article

  • Where is the value of OEA?

    - by [email protected]
    In a room full of architects, if you were to ask for the definition of enterprise architecture, or the importance thereof, you are likely to get a number of varying viewpoints, ranging from a complete analysis of the digital assets of an organization to a strategic alignment of business goals/objectives to IT initiatives. Similarly, in a room full of senior business executives, if you asked them how they see their IT groups and their effectiveness in aligning to business strategy, you would get a myriad of responses, ranging from “a huge drain on our bottom line”, “always more expensive than budgeted”, and “lack of agility; by the time IT is ready, my business strategy has changed”, to, on the rare occurrence, “a leader of innovation that is in lock step with my business strategy”. But does this necessarily demonstrate the overall value of enterprise architecture?

    Having a framework and process is of critical importance to help produce a number of the artefacts that ultimately align technology goals and initiatives to business strategy; however, is that really where the value is? I believe that first we need to understand the concept of value. Value typically is a measure of sorts: when we purchase a product, its value is equivalent to the maximum amount that someone is willing to pay for it. However, is the same equation valid in terms of the business value of enterprise architecture? Is the library of artefacts generated through a process/framework, inclusive of a strategic roadmap to realize the enterprise architecture, where the value is?

    If we agree that enterprise architecture is the alignment of IT and IT assets to support business strategy, and that by achieving our business strategy we have increased the business value of the enterprise, then it seems that, in order to really identify the true value of an enterprise architecture, we need to understand how we measure business value. A number of formal measurement methodologies exist for this purpose: business models, balanced scorecards, etc. Once we have an understanding of how to measure the business value of each of the organizational units within an enterprise, then we understand how the enterprise architecture contributes to the success of business strategy, and can EXECUTE on the roadmap to implement and deliver the IT initiatives that provide MEASURABLE returns.

    As we analyse the value chain of each of the individual organizational units within the enterprise, we may identify how a unit has performed by quantitatively measuring its proximity to achieving the goals defined by the business for that unit. However, it would appear that true business value (the aggregate of all of the business units in the value chain) is to some degree subjectively measured; for public companies this lies in shareholder value, the true value being the maximum amount that someone would pay for shares of the organization.

    Read the article

  • Who Makes a Good Product Owner

    - by Robert May
    In general, the best product owners are those that care passionately about the customer of the product. Note that I didn't say about the product itself. Actually, people that only care about the product generally do not make good product owners. Products only matter in relationship to their customers. If a product doesn't provide value to the customer, then the product has no value, no matter what a person might think of the product, and no matter what cool technologies exist inside of it.

    A good product owner is also a good negotiator. They recognize that many different priorities exist inside of a corporation, but that there can be only one list that developers work from. A good product owner recognizes that it's their job to help others around them prioritize (perhaps with a Product Council), but also understands that they alone have the final say about priorities and must be willing to make the tough decisions required. Deciding the priority between two perfectly valid stories is very difficult, especially when the stories are from two different departments!

    A good product owner is deeply interested in helping the team be successful. They don't seek to control the team, but instead seek to understand what the team can do and then work with the team to get the best product possible for the customer. A good product owner is never denigrating to team members, ever. They recognize that such behavior would damage the trust that needs to be present between team members and product owners, and will avoid it at all costs.

    In general, technical people (i.e. former or current developers) make poor product owners. In their minds, they can't separate implementation details from user functionality, so their stories end up sounding like implementation details. For example, "The user enters their username on the password screen" is something that a technical product owner would write. The proper wording for that story is "A user supplies the system with their credentials." Because technical people think differently from the rest of the population, they are generally not a good fit.

    A good product owner is also a good writer. Writing good stories demands good writing. The art of persuasion, descriptiveness, and just plain good grammar are all required. A good product owner must also be well spoken, since much of what they convey will be conveyed with the spoken word, not just the written word.

    A good product owner is a "people person." They like talking to people and are very patient. They don't mind having questions repeated or fielding many questions, because they want to make sure that the ideas they're conveying are properly understood so the customer gets the best product possible. They are happy to answer any questions a team member may have and invite feedback and criticism of designs and stories, since they want a good product. They have little ego that gets in the way of building a great product.

    All of these qualities can be hard to find, but if you look closely enough, you'll find the right person in your organization. Product owners can be found anywhere, not just in upper management. Some of the best product owners are those that are very close to the customer. In fact, check your customer support staff; I'd bet that several great product owners are lurking there.

    A final note about what makes a good product owner: you're probably NOT going to find one in a manager, especially if they consider themselves a "Manager." Product owners don't manage anything but the backlog, so be especially careful if the person you're selecting for product owner is a manager.

    Up next: "Messing with the Team."

    Technorati Tags: Scrum,Product Owner

    Read the article

  • LexisNexis and Oracle Join Forces to Prevent Fraud and Identity Abuse

    - by Tanu Sood
    Author: Mark Karlstrand

    About the writer: Mark Karlstrand is a Senior Product Manager at Oracle focused on innovative security for enterprise web and mobile applications. Over the last sixteen years Mark served as a director in a number of tech startups before joining Oracle in 2007. Working with a team of talented architects and engineers, Mark developed Oracle Adaptive Access Manager, a best-of-breed access security solution.

    The world's top enterprise software company and the world leader in data-driven solutions have teamed up to provide a new integrated security solution to prevent fraud and misuse of identities. LexisNexis Risk Solutions, a Gold level member of Oracle PartnerNetwork (OPN), today announced it has achieved Oracle Validated Integration of its Instant Authenticate product with Oracle Identity Management.

    Oracle provides the most complete Identity and Access Management platform. As the only identity management provider to offer advanced capabilities including device fingerprinting, location intelligence, real-time risk analysis, and context-aware authentication and authorization, the Oracle offering is unique in the industry. LexisNexis Risk Solutions provides the industry-leading Instant Authenticate dynamic knowledge-based authentication (KBA) service, which offers customers a secure and cost-effective means to authenticate new users or to prove identity for password resets, lockouts, and similar scenarios. Oracle and LexisNexis now offer an integrated solution that combines the power of the most advanced identity management platform and superior data-driven user authentication to stop identity fraud in its tracks and, in turn, offer significant operational cost savings. The solution offers the ability to challenge users with dynamic knowledge-based authentication based on the risk of an access request or transaction, thereby offering an additional level beyond other authentication methods, such as static challenge questions or one-time passwords, when needed.

    For example, with Oracle Identity Management self-service, the forgotten-password reset workflow utilizes advanced capabilities including device fingerprinting, location intelligence, risk analysis, and one-time password (OTP) via short message service (SMS) to secure this sensitive flow. Even when a user has lost or misplaced his/her mobile phone and, therefore, cannot receive the SMS, the new integrated solution eliminates the need to contact the help desk: the Oracle Identity Management platform dynamically switches to the LexisNexis Instant Authenticate service for authentication if the user is not able to authenticate via OTP. The advanced Oracle and LexisNexis integrated solution thus both improves the user experience and saves money by avoiding unnecessary help desk calls.

    Oracle Identity and Access Management secures applications, Juniper SSL VPN, and other web resources with a thoroughly modern, layered, and context-aware platform. Users don't gain access just because they happen to have a valid username and password. An enterprise utilizing the Oracle solution has the ability to predicate access on the specific context of the current situation. The device, location, temporal data, and any number of other attributes are evaluated in real time to determine the specific risk at that moment. If the risk is elevated, a user can be challenged for additional authentication, refused access, or allowed access with limited privileges.

    The LexisNexis Instant Authenticate dynamic KBA service plugs into the Oracle platform to provide an additional layer of security by validating a user's identity in high-risk access or transactions. The large and varied pool of data the LexisNexis solution utilizes to quiz a user makes this challenge mechanism even more robust. This strong combination of Oracle and LexisNexis user authentication capabilities greatly mitigates the risk of exposing sensitive applications and services on the Internet, which helps an enterprise grow its business with confidence.

    Resources:
    Press release: LexisNexis® Achieves Oracle Validated Integration with Oracle Identity Management
    Oracle Access Management (HTML)
    Oracle Adaptive Access Manager (PDF)

    Read the article

  • SQL Server Transaction Marks: Restoring multiple databases to a common relative point

    - by Mladen Prajdic
    We’re all familiar with the ability to restore a database to a point in time using the RESTORE WITH STOPAT statement. But what if we have multiple databases that are accessed from one application or that modify each other? And over multiple instances? And all databases have different workloads? And we want to restore all of the databases to some known common relative point? The catch here is that this common relative point isn’t the same point in time for all databases. This common relative point in time might be now in DB1, now minus 1 hour in DB2, and yesterday in DB3. And we don’t know the exact times.

    Let me introduce you to transaction marks. When we run a marked transaction using the WITH MARK option, a flag is set in the transaction log and a row is added to the msdb..logmarkhistory table. When restoring a transaction log backup, we can restore to either before or after that marked transaction. The best thing is that we don’t even need to have one database modifying another database; all we have to do is use a marked transaction with the same name in each database. Let’s see how this works with an example. The code comments say what’s going on.

    USE master
    GO
    CREATE DATABASE TestTxMark1
    GO
    USE TestTxMark1
    GO
    CREATE TABLE TestTable1 (ID INT, VALUE UNIQUEIDENTIFIER)

    -- insert some data into the table so we can have a starting point
    INSERT INTO TestTable1
    SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NULL
    FROM master..spt_values
    ORDER BY RN

    SELECT *
    FROM TestTable1
    GO

    -- TAKE A FULL BACKUP of the database
    BACKUP DATABASE TestTxMark1 TO DISK = 'c:\TestTxMark1.bak'
    GO

    USE master
    GO
    CREATE DATABASE TestTxMark2
    GO
    USE TestTxMark2
    GO
    CREATE TABLE TestTable2 (ID INT, VALUE UNIQUEIDENTIFIER)

    -- insert some data into the table so we can have a starting point
    INSERT INTO TestTable2
    SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NEWID()
    FROM master..spt_values
    ORDER BY RN

    SELECT *
    FROM TestTable2
    GO

    -- TAKE A FULL BACKUP of our database
    BACKUP DATABASE TestTxMark2 TO DISK = 'c:\TestTxMark2.bak'
    GO

    -- start a marked transaction that modifies both databases
    BEGIN TRAN TxDb WITH MARK
        -- update values from NULL to a random value
        UPDATE TestTable1
        SET VALUE = NEWID();

        -- update the first 100 values from a random value
        -- to NULL in the other database
        UPDATE TestTxMark2.dbo.TestTable2
        SET VALUE = NULL
        WHERE ID <= 100;
    COMMIT
    GO

    -- some time goes by here
    -- with various database activity...

    -- We see two entries for marks in each database.
    -- This is just informational and has no bearing on the restore itself.
    SELECT * FROM msdb..logmarkhistory

    USE master
    GO
    -- create a log backup to restore to the mark point
    BACKUP LOG TestTxMark1 TO DISK = 'c:\TestTxMark1.trn'
    GO
    -- drop the database so we can restore it back
    DROP DATABASE TestTxMark1
    GO

    USE master
    GO
    -- create a log backup to restore to the mark point
    BACKUP LOG TestTxMark2 TO DISK = 'c:\TestTxMark2.trn'
    GO
    -- drop the database so we can restore it back
    DROP DATABASE TestTxMark2
    GO

    -- RESTORE THE DATABASE BACK TO BEFORE OUR TRANSACTION
    -- restore the full backup
    RESTORE DATABASE TestTxMark1 FROM DISK = 'c:\TestTxMark1.bak' WITH NORECOVERY;
    -- restore the log backup up to the transaction mark
    RESTORE LOG TestTxMark1 FROM DISK = 'c:\TestTxMark1.trn'
    WITH RECOVERY,
         -- recover to the state before the transaction
         STOPBEFOREMARK = 'TxDb';
         -- recover to the state after the transaction
         -- STOPATMARK = 'TxDb';
    GO

    -- RESTORE THE DATABASE BACK TO BEFORE OUR TRANSACTION
    -- restore the full backup
    RESTORE DATABASE TestTxMark2 FROM DISK = 'c:\TestTxMark2.bak' WITH NORECOVERY;
    -- restore the log backup up to the transaction mark
    RESTORE LOG TestTxMark2 FROM DISK = 'c:\TestTxMark2.trn'
    WITH RECOVERY,
         -- recover to the state before the transaction
         STOPBEFOREMARK = 'TxDb';
         -- recover to the state after the transaction
         -- STOPATMARK = 'TxDb';
    GO

    USE TestTxMark1
    -- we restored to a time before the transaction,
    -- so we have NULL values in our table
    SELECT * FROM TestTable1

    USE TestTxMark2
    -- we restored to a time before the transaction,
    -- so we DON'T have NULL values in our table
    SELECT * FROM TestTable2

    Transaction marks can be used as a crude sync mechanism for cross-database operations. With them we can mark our databases with a common “restore to” point so we know we have a valid state between all databases to restore to.

    Read the article

  • SQL SERVER – What are Actions in SSAS and How to Make a Reporting Action

    - by Pinal Dave
    Actions are used for customized browsing and drilling of data for the end user. An action is an event that a user can raise while accessing the cube data. Actions are used in cube browsers like Excel and are triggered when a user in a client tool clicks on a particular member, level, dimension, cell, or maybe the cube itself. For example, a user might be able to see a Reporting Services report, open a web page, or drill through to detailed information related to the cube data.

    Analysis Services supports 3 types of actions:

    - Report
    - Drill-through
    - Standard

    In this blog post, I will explain the Report action. The objective of this action is to return a report with details of the products where the sales amount is greater than 10,000 in cube browser analysis. You need to create a basic cube first with the facts and dimensions you want in the analysis. Following are the steps to create a reporting action.

    1.) Go to SQL Server Data Tools and open the Analysis Services project. Navigate to Actions and click on New Reporting Action.
    2.) Specify the name of the action and choose target type as attribute members, since we have to create the action on members of an attribute.
    3.) Specify the target object of your report action. The target object would be the dimension or attribute on which you want the report to appear. In our case it is product name.
    4.) Next you have to define the condition on which you want the report link to appear. This is an optional feature. In this example we are specifying a condition which will check if the sales amount is greater than 10,000, so that the link appears only for those products where the defined condition is met.
    5.) Next you have to specify the server name on which the report is present, the report path, and the report format in which you want the report to appear.
    6.) Additionally you can specify the parameters. As with the conditional expression, the parameters should be a valid MDX expression. The parameter name should be the same as the one defined in the report.
    7.) Deploy your solution after you are done with specifying parameters, and go to the cube browser.
    8.) Click on the "Analyze in Excel" button; this will open your cube in Excel.
    9.) Make an analysis which shows product names and their sales amount.
    10.) Right click on a product where the sales amount is greater than 10,000 and you will see the reporting action link. Click on that and you will be taken to your Reporting Services report.
    11.) Clicking on the link will take you to the URL of the report. I created this report using the report project wizard in SQL Server Data Tools.

    So, this is how we can launch reports from a cube browser. Similarly you can open web pages, run applications, and do a number of other tasks. Koenig Solutions offers SSAS training which covers all of Analysis Services, including Reporting, in great detail. In my next blog post I will talk about drill-through actions.

    Author: Namita Sharma, Senior Corporate Trainer at Koenig Solutions.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSAS

    Read the article

  • Inside Red Gate - Experimenting In Public

    - by Simon Cooper
    Over the next few weeks, we'll be performing experiments on SmartAssembly to confirm or refute various hypotheses we have about how people use the product, what is stopping them from using it to its full extent, and what we can change to make it more useful and easier to use. Some of these experiments can be done within the team, some within Red Gate, and some need to be done on external users.

    External testing

    Some external testing can be done by standard usability tests and surveys; however, there are some hypotheses that can only be tested by building a version of SmartAssembly with some things in the UI or implementation changed. We'll then be able to look at how the experimental build is used compared to the 'mainline' build, which forms our baseline or control group, and use this data to confirm or refute the relevant hypotheses. However, there are several issues we need to consider before running experiments using separate builds:

    Ideally, the user wouldn't know they're running an experimental SmartAssembly. We don't want users to use the experimental build like it's an experimental build, we want them to use it like it's the real mainline build. Only then will we get valid, useful, and informative data concerning our hypotheses.

    There's no point running the experiments if we can't find out what happens after the download. To confirm or refute some of our hypotheses, we need to find out how the tool is used once it is installed. Fortunately, we've applied feature usage reporting to the SmartAssembly codebase itself to provide us with that information. Of course, this then makes the experimental data conditional on the user agreeing to send that data back to us in the first place. Unfortunately, even though this does limit the amount of useful data we'll be getting back, and possibly skew the data, there's not much we can do about this; we don't collect feature usage data without the user's consent. Looks like we'll simply have to live with this.

    What if the user tries to buy the experiment? This is something that isn't really covered by the Lean Startup book; how do you support users who give you money for an experiment? If the experiment is a new feature, and the user buys a license for SmartAssembly based on that feature, then what do we do if we later decide to pivot & scrap that feature? We've either got to spend time and money bringing that feature up to production quality and into the mainline anyway, or we've got disgruntled customers. Either way is bad. Again, there's not really any good solution to this.

    Similarly, what if we've removed some features for an experiment and a potential new user downloads the experimental build? (As I said above, there's no indication the build is an experimental build, as we want to see what users really do with it.) The crucial feature they need is missing, causing a bad trial experience, a lost potential customer, and a lost chance to help the customer with their problem. Again, this is something not really covered by the Lean Startup book, and something that doesn't have a good solution.

    So, some tricky issues there, not all of them with nice easy answers. Turns out the practicalities of running Lean Startup experiments are more complicated than they first seem!

    Read the article

  • No sound on Lenovo T60 (ALSA AD1981 IEC958)

    - by Nate
    Any help on getting the sound to come through my Lenovo T60's built-in speakers, headphones, or mic would be greatly appreciated. The three buttons to increase and decrease the sound seem to work, the BIOS has the sound card enabled, and the buttons beep when pressed. But when going to YouTube or playing music, no sound is heard. Thanks, Nate

    Feb 23 update: I didn't see anything specific in the sys logs with Rhythmbox when connecting my iPod. Rhythmbox is playing, but still no sound. Output is set to analog output and the volume is all of the way up. Here are the syslog details for today:

    Feb 23 17:42:32 itgis01398 rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="824" x-info="http://www.rsyslog.com"] rsyslogd was HUPed, type 'lightweight'.
    Feb 23 17:42:33 itgis01398 rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="824" x-info="http://www.rsyslog.com"] rsyslogd was HUPed, type 'lightweight'.
    Feb 23 17:42:49 itgis01398 anacron[968]: Job `cron.daily' terminated
    Feb 23 17:42:49 itgis01398 anacron[968]: Job `cron.weekly' started
    Feb 23 17:42:49 itgis01398 anacron[12067]: Updated timestamp for job `cron.weekly' to 2011-02-23
    Feb 23 17:42:53 itgis01398 anacron[968]: Job `cron.weekly' terminated
    Feb 23 17:42:53 itgis01398 anacron[968]: Normal exit (2 jobs run)
    Feb 23 18:01:19 itgis01398 kernel: [ 2731.324067] usb 1-5: new high speed USB device using ehci_hcd and address 3
    Feb 23 18:01:19 itgis01398 kernel: [ 2731.482879] Initializing USB Mass Storage driver...
    Feb 23 18:01:19 itgis01398 kernel: [ 2731.483061] usb-storage 1-5:1.0: Quirks match for vid 05ac pid 1205: 10
    Feb 23 18:01:19 itgis01398 kernel: [ 2731.483116] scsi6 : usb-storage 1-5:1.0
    Feb 23 18:01:19 itgis01398 kernel: [ 2731.483306] usbcore: registered new interface driver usb-storage
    Feb 23 18:01:19 itgis01398 kernel: [ 2731.483310] USB Mass Storage support registered.
    Feb 23 18:01:20 itgis01398 kernel: [ 2732.481116] scsi 6:0:0:0: Direct-Access Apple iPod 1.62 PQ: 0 ANSI: 0
    Feb 23 18:01:20 itgis01398 kernel: [ 2732.482466] sd 6:0:0:0: Attached scsi generic sg2 type 0
    Feb 23 18:01:20 itgis01398 kernel: [ 2732.485095] sd 6:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488
    Feb 23 18:01:20 itgis01398 kernel: [ 2732.485110] sd 6:0:0:0: [sdb] 7999487 512-byte logical blocks: (4.09 GB/3.81 GiB)
    Feb 23 18:01:20 itgis01398 kernel: [ 2732.487933] sd 6:0:0:0: [sdb] Write Protect is off
    Feb 23 18:01:20 itgis01398 kernel: [ 2732.487941] sd 6:0:0:0: [sdb] Mode Sense: 64 00 00 08
    Feb 23 18:01:20 itgis01398 kernel: [ 2732.487947] sd 6:0:0:0: [sdb] Assuming drive cache: write through
    Feb 23 18:01:20 itgis01398 kernel: [ 2732.489927] sd 6:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488
    Feb 23 18:01:20 itgis01398 kernel: [ 2732.491150] sd 6:0:0:0: [sdb] Assuming drive cache: write through
    Feb 23 18:01:20 itgis01398 kernel: [ 2732.491163] sdb: sdb1 sdb2
    Feb 23 18:01:20 itgis01398 kernel: [ 2732.510428] sd 6:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488
    Feb 23 18:01:20 itgis01398 kernel: [ 2732.511288] sd 6:0:0:0: [sdb] Assuming drive cache: write through
    Feb 23 18:01:20 itgis01398 kernel: [ 2732.511297] sd 6:0:0:0: [sdb] Attached SCSI removable disk
    Feb 23 18:01:21 itgis01398 kernel: [ 2733.746675] FAT: invalid media value (0x2f)
    Feb 23 18:01:21 itgis01398 kernel: [ 2733.746682] VFS: Can't find a valid FAT filesystem on dev sdb1.
    Feb 23 18:01:22 itgis01398 upstart-udev-bridge[330]: Env must be KEY=VALUE pairs
    Feb 23 18:02:07 itgis01398 kernel: [ 2780.115826] sd 6:0:0:0: [sdb] Unhandled sense code
    Feb 23 18:02:07 itgis01398 kernel: [ 2780.115835] sd 6:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    Feb 23 18:02:07 itgis01398 kernel: [ 2780.115844] sd 6:0:0:0: [sdb] Sense Key : Medium Error [current]
    Feb 23 18:02:07 itgis01398 kernel: [ 2780.115855] Info fld=0x0
    Feb 23 18:02:07 itgis01398 kernel: [ 2780.115859] sd 6:0:0:0: [sdb] Add. Sense: Unrecovered read error
    Feb 23 18:02:07 itgis01398 kernel: [ 2780.115870] sd 6:0:0:0: [sdb] CDB: Read(10): 28 00 00 08 fd e9 00 00 f0 00
    Feb 23 18:02:07 itgis01398 kernel: [ 2780.115892] end_request: I/O error, dev sdb, sector 589289
    Feb 23 18:02:49 itgis01398 kernel: [ 2821.351464] sd 6:0:0:0: [sdb] Unhandled sense code
    Feb 23 18:02:49 itgis01398 kernel: [ 2821.351473] sd 6:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    Feb 23 18:02:49 itgis01398 kernel: [ 2821.351482] sd 6:0:0:0: [sdb] Sense Key : Medium Error [current]
    Feb 23 18:02:49 itgis01398 kernel: [ 2821.351493] Info fld=0x0
    Feb 23 18:02:49 itgis01398 kernel: [ 2821.351497] sd 6:0:0:0: [sdb] Add. Sense: No additional sense information
    Feb 23 18:02:49 itgis01398 kernel: [ 2821.351507] sd 6:0:0:0: [sdb] CDB: Read(10): 28 00 00 08 fe d9 00 00 10 00
    Feb 23 18:02:49 itgis01398 kernel: [ 2821.351530] end_request: I/O error, dev sdb, sector 589529
    Feb 23 18:17:01 itgis01398 CRON[12709]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)

    Read the article

  • FAQ: GridView Calculation with JavaScript - Editable Price Field

    - by Vincent Maverick Durano
    Recently I wrote a series of blog posts that demonstrate how to do calculations in a GridView using JavaScript. You can check out the series of posts below:

    FAQ: GridView Calculation with JavaScript
    FAQ: GridView Calculation with JavaScript - Formatting and Validation
    FAQ: GridView Calculation with JavaScript - Displaying Quantity Total

    Recently a user in the forums asked how to calculate the total quantity, sub-totals, and total amount in a GridView when a user enters both the price and quantity in TextBox fields. Obviously the series of posts I wrote will not work in this case, because the price field in those examples is a Label (read-only) and not a TextBox field. In this post I'm going to demonstrate how to accomplish this using the same method used in my previous examples. Basically I'm just going to modify the GridView declaration to replace the Label price field with a TextBox so that users can type in it, and then modify the CalculateTotals() JavaScript function. Here are the code blocks below:

    <html xmlns="http://www.w3.org/1999/xhtml" >
    <head runat="server">
        <title></title>
        <script type="text/javascript">
            function CalculateTotals() {
                var gv = document.getElementById("<%= GridView1.ClientID %>");
                var tb = gv.getElementsByTagName("input");
                var lb = gv.getElementsByTagName("span");
                var sub = 0;
                var total = 0;
                var indexQ = 1;
                var indexP = 0;
                var price = 0;
                var qty = 0;
                var totalQty = 0;
                var tbCount = tb.length / 2;
                for (var i = 0; i < tbCount; i++) {
                    if (tb[i].type == "text") {
                        ValidateNumber(tb[i + indexQ]);
                        sub = parseFloat(tb[i + indexP].value) * parseFloat(tb[i + indexQ].value);
                        if (isNaN(sub)) {
                            lb[i].innerHTML = "0.00";
                            sub = 0;
                        }
                        else {
                            lb[i].innerHTML = FormatToMoney(sub, "$", ",", ".");
                        }
                        if (isNaN(tb[i + indexQ].value) || tb[i + indexQ].value == "") {
                            qty = 0;
                        }
                        else {
                            qty = tb[i + indexQ].value;
                        }
                        totalQty += parseInt(qty);
                        total += parseFloat(sub);
                        indexQ++;
                        indexP++;
                    }
                }
                lb[lb.length - 2].innerHTML = totalQty;
                lb[lb.length - 1].innerHTML = FormatToMoney(total, "$", ",", ".");
            }

            function ValidateNumber(o) {
                if (o.value.length > 0) {
                    o.value = o.value.replace(/[^\d]+/g, ''); // Allow only whole numbers
                }
            }

            function isThousands(position) {
                if (Math.floor(position / 3) * 3 == position) return true;
                return false;
            };

            function FormatToMoney(theNumber, theCurrency, theThousands, theDecimal) {
                var theDecimalDigits = Math.round((theNumber * 100) - (Math.floor(theNumber) * 100));
                theDecimalDigits = "" + (theDecimalDigits + "0").substring(0, 2);
                theNumber = "" + Math.floor(theNumber);
                var theOutput = theCurrency;
                for (x = 0; x < theNumber.length; x++) {
                    theOutput += theNumber.substring(x, x + 1);
                    if (isThousands(theNumber.length - x - 1) && (theNumber.length - x - 1 != 0)) {
                        theOutput += theThousands;
                    };
                };
                theOutput += theDecimal + theDecimalDigits;
                return theOutput;
            }
        </script>
    </head>
    <body>
        <form id="form1" runat="server">
        <asp:gridview ID="GridView1" runat="server" ShowFooter="true" AutoGenerateColumns="false">
            <Columns>
                <asp:BoundField DataField="RowNumber" HeaderText="Row Number" />
                <asp:BoundField DataField="Description" HeaderText="Item Description" />
                <asp:TemplateField HeaderText="Item Price">
                    <ItemTemplate>
                        <asp:TextBox ID="TXTPrice" runat="server" onkeyup="CalculateTotals();"></asp:TextBox>
                    </ItemTemplate>
                    <FooterTemplate>
                        <b>Total Qty:</b>
                    </FooterTemplate>
                </asp:TemplateField>
                <asp:TemplateField HeaderText="Quantity">
                    <ItemTemplate>
                        <asp:TextBox ID="TXTQty" runat="server" onkeyup="CalculateTotals();"></asp:TextBox>
                    </ItemTemplate>
                    <FooterTemplate>
                        <asp:Label ID="LBLQtyTotal" runat="server" Font-Bold="true" ForeColor="Blue" Text="0"></asp:Label>
                        &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<b>Total Amount:</b>
                    </FooterTemplate>
                </asp:TemplateField>
                <asp:TemplateField HeaderText="Sub-Total">
                    <ItemTemplate>
                        <asp:Label ID="LBLSubTotal" runat="server" ForeColor="Green" Text="0.00"></asp:Label>
                    </ItemTemplate>
                    <FooterTemplate>
                        <asp:Label ID="LBLTotal" runat="server" ForeColor="Green" Font-Bold="true" Text="0.00"></asp:Label>
                    </FooterTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:gridview>
        </form>
    </body>
    </html>

    That's it! I hope someone finds this post useful!

    Technorati Tags: ASP.NET,GridView,JavaScript

    Read the article

  • Some OBI EE Tricks and Tips in the Admin Tool, by Gerry Langton

    - by hamsun
    How to set the log level from a session variable initialization block

    As we know, it is normal to set the log level non-zero for a particular user when we wish to debug problems. However, sometimes it is inconvenient to go into each user's properties in the Admin tool and update the log level, so I am showing a method which allows the log level to be set for all users via a session initialization block. This is particularly useful for anyone wanting an alternative way to set the log level. The screenshots shown use the OBIEE 11g SampleApp demo but are applicable to any environment.

    Open the appropriate RPD in on-line mode and navigate to Manage Variables. Select Session Initialization Blocks, right click in the white space, and create a New Initialization Block. I called the initialization block Set_Loglevel. Now click on 'Edit Data Source' to enter the SQL.

    Choose the 'Use OBI EE Server' option for the SQL. This means that the SQL provided must use tables which have been defined in the Physical layer of the RPD, and whilst there is no need to provide a connection pool, you must work in on-line mode. The SQL can access any of the RPD tables and is purely used to return a value of 2. The 'Test' button confirms that the SQL is valid.

    Next, click on the 'Edit Data Target' button to add the LOGLEVEL variable to the initialization block. Check the 'Enable any user to set the value' option so that this will work for any user. Click OK, and a warning message will display because LOGLEVEL is a system session variable; click 'Yes'. Click 'OK' to save the initialization block, then check in the on-line changes.

    To test that LOGLEVEL has been set, log in to OBIEE using an administrative login (e.g. weblogic) and reload server metadata, either from the Analysis editor or from the Administration > Reload Files and Metadata link. Run a query, then navigate to Administration > Manage Sessions and click 'View Log' for the query just issued (which should be approximately the last in the list). A log file should exist, and with LOGLEVEL set to 2 it should include both the logical and physical SQL. If more diagnostic information is required, set LOGLEVEL to a higher value.

    If logging is required only for a particular analysis, an alternative method can be used directly from the Analysis editor. Edit the analysis for which debugging is required and click on the Advanced tab. Scroll down to the Advanced SQL clauses section and enter the following in the Prefix box:

    SET VARIABLE LOGLEVEL = 2;

    Click the 'Apply SQL' button. The SET VARIABLE statement will now prefix the analysis's logical SQL, so any time this analysis is run it will produce a log.

    You can find information about training for Oracle BI EE products here or in the OU Learning Paths. Please send me an email at [email protected] if you have any further questions.

    About the author: Gerry Langton started at Siebel Systems in 1999 working as a technical instructor teaching both Siebel application development and Siebel Analytics (which subsequently became Oracle BI EE). Since 2006 Gerry has worked as Senior Principal Instructor within Oracle University, specialising in Oracle BI EE, Oracle BI Publisher and Oracle Data Warehouse development for BI.

    Read the article

  • Launch 7: Windows Phone 7 Style Live Tiles On Android Mobiles

    - by Gopinath
    Android is a great mobile OS, but one thought lingers in the minds of a few Android owners: am I using a cheap iPhone? This is a valid thought for many low-end Android users, as their phones run sluggishly and the user interface of Android looks like an imitation of iOS. When it comes to Windows Phone 7 users, even though their operating system's features are not as great as the iPhone's or Android's, it has a unique user interface: the Windows Phone 7 UI is very intuitive and fresh, and its constantly updating Live Tiles show all the required information on the home screen. Android has the best mobile operating system features except for the UI, and Windows Phone 7 has an excellent user interface. How about porting the Windows Phone 7 Tiles interface to Android? That would be great.

    The Launcher 7 app brings the best of the Windows Phone 7 look and feel to Android. Once the Launcher 7 app is installed and activated, it brings Live Tiles, or constantly updating controls that show information, to the Android home screen. Apart from simple and smooth tiles, a handful of customization options are provided: users can change the colour of the tiles, add new tiles, and enable/disable transitions.

    The reviews on Android Market are on the positive side, with 4.4 stars from 10,000+ reviewers. Here are a few user reviews:

    1. Does what it says. only issue for me is that the app drawer doesn’t rotate. And I would like the UI to rotate when my KB is opened. HTC desire z – Jonathan
    2. Works great on atrix.Kudos to developers. Awesome. Though needs: Better notification bar More stock images of tiles Better fitting of widgets on tiles – Manny
    3. Looks really good like it much more than I thought I would runs real smooth running royal ginger 2.1 – Jay
    4. Omg amazing i am definetly keeping it as my default best of android and windows – Devon
    5. Man! An update every week! Very very responsive developer! – Andrew

    You can read more reviews on Android Market here. There is no doubt that this application is receiving rave reviews, but after scanning through them, a few complaints throw light on the negative side: the battery drains a bit faster, and low-end mobiles run a bit sluggishly.

    The application is available in two versions: an ad-supported free version and a $1.41 ad-free version.

    Download Launcher 7 from Android Market

    This article, titled "Launch 7: Windows Phone 7 Style Live Tiles On Android Mobiles", was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • Developer Training – 6 Online Courses to Learn SQL Server, MySQL and Technology

    - by Pinal Dave
    Video courses are the next big thing, and I am happy to have authored 6 different video courses with Pluralsight so far. Here is the list of the courses; I have listed all of my video courses over here. Note: if you click on a course and it does not open, you need to log in to Pluralsight with a valid username and password or sign up for a FREE trial.

    Please leave a comment with your favorite course in the comment section. 10 random winners will get a surprise gift via email. Bonus: list your favorite module from the course site.

    SQL Server Performance: Introduction to Query Tuning
    SQL Server performance tuning is an in-depth topic, and an art to master. A key component of overall application performance tuning is query tuning. Writing queries in an efficient manner, and making sure they execute in the most optimal way possible, is always a challenge. The basics revolve around the details of how SQL Server carries out query execution, so the optimizations explored in this course follow along the same lines. Click to View Course

    SQL Server Performance: Indexing Basics
    Indexes are the most crucial objects of the database. They are the first stop for any DBA and developer when it comes to performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning, one has to understand the fundamentals of indexes and the best practices associated with them. This course is for every DBA and developer who deals with performance tuning and wants to use indexes to improve the performance of the server. Click to View Course

    SQL Server Questions and Answers
    This course is designed to help you better understand how to use SQL Server effectively. The course presents many of the common misconceptions about SQL Server, and then carefully debunks those misconceptions with clear explanations and short but compelling demos, showing you how SQL Server really works. This course is for anyone working with SQL Server databases who wants to improve her knowledge and understanding of this complex platform. Click to View Course

    MySQL Fundamentals
    MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open source web application software stack. This course covers the fundamentals of MySQL, including how to install MySQL and how to write basic data retrieval and data modification queries. Click to View Course

    Building a Successful Blog
    Expressing yourself is the most common human behavior, and blogging has made it easy to express yourself. Just as a letter or book has a structure and formula, blogging also has a structure and formula. In this introductory course on blogging we will go over a few of the basics of blogging and show the way to get started with blogging immediately. If you already have a blog, this course will be even more relevant, as it discusses many of the common questions and issues you face in your blogging routine. Click to View Course

    Introduction to ColdFusion
    ColdFusion is a rapid web application development platform. In this course you will learn the basics of how to use the ColdFusion platform and rapidly develop web sites. The course begins with the basics of ColdFusion Markup Language and moves on to common development language practices. From there we move to frequent database operations and advanced concepts of forms, sessions, and cookies. The last module sums up all the concepts covered in the course with a sample application. Click to View Course

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Cellbi Silverlight Controls Giveaway (5 Licenses to Give Away)

    - by mbcrump
    Cellbi recently updated their Silverlight controls to version 4 and to support Visual Studio 2010. I played with a couple of demos on their site and had to take a closer look, so I headed over to their website and downloaded the controls. The first thing that I noticed was all of the special text effects and animations included. I emailed Cellbi asking if I could give away their controls in my January 2011 giveaway and they said yes. They also volunteered to give away 5 licenses in total, so your chances of winning increase. I am very thankful they were willing to help the Silverlight community with this giveaway. So, some quick rules below:

    -----------------------------------------------------------------------------------------------------------------------------------------------------------

    Win a FREE developer's license of Cellbi Silverlight Controls! (5 licenses to give away) A random winner will be announced on February 1st, 2011! To be entered into the contest, do the following things:

    Subscribe to my feed.
    Leave a comment below with a valid email account (I WILL NOT share this info with anyone.)
    Retweet the following: I just entered to win free #Silverlight controls from @mbcrump and @cellbi http://mcrump.me/cscfree ! Don't change the URL, because this allows me to track the users that tweet this page. Don't forget to visit Cellbi because they made this possible.

    -----------------------------------------------------------------------------------------------------------------------------------------------------------

    Before we get started with the Silverlight controls, here are a couple of links to bookmark: the "What's new in this release" page is here, and you can also check out the live demos here. Don't worry about the samples/help documentation; that is installed to your local HDD during the installation process.

    Begin by downloading the trial version and running the installer. After everything is installed you will see the following screen:

    After it is installed, you may want to take a look at your Toolbox in Visual Studio 2010. After you add the controls from the "Choose Items" dialog under Silverlight, you will see that you now have access to all of these controls. At this point, using the controls is as simple as drag and drop onto your Silverlight container; the proper namespaces are created for you.

    It's hard to show with a static screenshot just how powerful the controls actually are, since all of them are animations and effects, so I will refer you to the demo page to learn more about them. It is worth noting that the Sfx pack really focuses on the following core effects:

    I will show you the best route to get started building a new project with them below. The best place to start is the sample browser, which you can access via SvFx Launcher. In my case, I want to build a new Carousel. I simply navigate to the carousel that I want to build and hit the "Cs" icon at the top. This launches Visual Studio 2010, and now I can copy/paste the XAML into my project. That is all there is to it.

    Hopefully this post was helpful, and don't forget to leave a comment below in order to win a set of the controls!

    Subscribe to my feed

    Read the article

  • Do you want to be an ALM Consultant?

    - by Martin Hinshelwood
    Northwest Cadence is looking for our next great consultant! At Northwest Cadence, we have created a work environment that emphasizes excellence, integrity, and out-of-the-box thinking. Our customers have high expectations (rightfully so) and we wouldn't have it any other way!

    Northwest Cadence has some of the most exciting customers I have ever worked with, and even though I have only been here just over a month, I have already:

    - Provided training/consulting for 3 government departments
    - Created and taught courseware for delivering Scrum to teams within a high-profile multinational company
    - Started presenting Microsoft's ALM Engagement Program

    So if you are interested in helping companies build better software more efficiently, then enquire at [email protected].

    Application Lifecycle Management (ALM) Consultant

    An ALM Consultant with a minimum of 8 years of relevant experience with Application Lifecycle Management, Visual Studio (including Visual Studio Team System), and software design is needed. The candidate must provide thought leadership on best practices for enterprise architecture, understand the Microsoft technology solution stack, and have a thorough understanding of enterprise application integration. The ALM Practice Lead will play a central role in designing and implementing the overall ALM practice strategy, including creating, updating, and delivering ALM courseware and consultancy engagements. This person will also provide project support, deliverables, and quality solutions on Visual Studio Team System that exceed client expectations. Engagements will vary and will involve providing expert training, consulting, mentoring, formulating technical strategies and policies, and acting as a "trusted advisor" to customers and internal teams. A sound sense of business and technical strategy is required, as are strong interpersonal skills and solid strategic thinking. The ideal candidate will be capable of envisioning the solution based on the early client requirements, communicating the vision to both technical and business stakeholders, and leading teams through implementation, as well as training, mentoring, and hands-on software development. The ideal candidate will demonstrate successful use of both agile and formal software development methods, enterprise application patterns, and effective leadership on prior projects.

    Job Requirements

    Minimum education: Bachelor's degree (computer science, engineering, or math preferred).

    Locale / travel: The Practice Lead position requires an estimated 50% travel, most of which will be in the continental US (a valid national passport must be maintained). This is a full-time position and will be based in the Kirkland office.

    Preferred education: Master's degree in Information Technology or Software Engineering; premium Microsoft certifications on .NET (MCSD) or MCPD, or relevant experience; Microsoft Certified Trainer (MCT) or relevant experience.

    Minimum experience and skills: 7+ years of experience with business information systems integration or custom business application design and development in a professional technology consulting, corporate MIS, or software development environment.

    Essential duties and responsibilities:

    - Provide training, consulting, and mentoring to organizations on topics that include Visual Studio Team System and ALM.
    - Create content, including labs and demonstrations, to be delivered as training classes by Northwest Cadence employees.
    - Lead development teams through the complete ALM and/or Visual Studio Team System solution.
    - Communicate in detail how a solution will integrate into the larger technical problem space for large, complex enterprises.
    - Define technical solution requirements.
    - Provide guidance to the customer and project team with respect to technical feasibility, complexity, and level of effort required to deliver a custom solution.
    - Ensure that the solution is designed, developed, and deployed in accordance with the agreed-upon development work plan.
    - Create and deliver weekly status reports of training and/or consulting progress.

    Engagement responsibilities:

    - Have a strong desire to provide thought leadership related to technology and to help grow the business.
    - Work effectively and professionally with employees at all levels of a customer's organization.
    - Have strong verbal and written communication skills.
    - Have effective presentation, organizational, and planning skills.
    - Have effective interpersonal skills and the ability to work in a team environment.

    Enquire at [email protected]

    Read the article

  • Isis Finally Rolls Out

    - by David Dorf
    Google has rolled its wallet out for several chains; I see the NFC readers in Walgreen's when I'm sent there for milk.  But Isis has been relatively quiet until now.  As of last week they have finally launched in their two test cities: Austin and Salt Lake City.  Below are the supported carriers and phones as of now, but more phones will be added later. AT&T supports: HTC One™ X, LG Escape™, Samsung Galaxy Exhilarate™, Samsung Galaxy S® III, Samsung Galaxy Rugby Pro™ T-Mobile supports: Samsung Galaxy S® II, Samsung Galaxy S® III, Samsung Galaxy S® Relay 4G Verizon supports: Droid Incredible 4G LTE. Of course iPhone owners have no wallet, since Apple didn't include an NFC chip. To start using Isis, you have to take your NFC-capable phone to your carrier's store to get the SIM replaced with a more sophisticated one that has a secure element configured for Isis.  The "secure element" is the cryptographic logic that secures mobile payments.  Carriers like the secure element in the SIM, while non-carriers (like Google) prefer the secure element in the phone's electronics. (I'm not entirely sure if you could support both Isis and Google Wallet on the same phone.  Anybody know?) Then you can download the Isis app from Google Play and load your cards.  Most credit cards are supported, and there's a process to verify the credit cards are valid.  Then you can select from the list of participating retailers to "follow."  Selecting a retailer allows that retailer to give you offers via the app. The app is well done and easy to use.  You can select a default payment type and also switch between them easily.  When the phone is tapped on the reader, there are two exchanges of information.  The payment information is transferred, and then the Isis "SmartTap" information, which includes an optional loyalty number and digital coupons.  Of course the value of mobile wallets comes from the ease of handling all three data types (i.e. payment, loyalty, offers). There are several advertisements for Isis running now, and my favorite is below.

    Read the article

  • Question on the implementation of my Entity System

    - by miguel.martin
    I am currently creating an Entity System in C++; it is almost complete (I have all the code there, I just have to add a few things and test it). The only thing is, I can't figure out how to implement some features. This Entity System is loosely based on the Artemis framework, but it is different. I'm not sure if I'll be able to type this out the way my head is processing it. I'm going to basically ask whether I should do something over something else. Okay, now I'll give a little detail on my Entity System itself. Here are the basic classes that my Entity System uses to actually work: Entity - An Id (and some methods to add/remove/get/etc Components) Component - An empty abstract class ComponentManager - Manages ALL components for ALL entities within a Scene EntitySystem - Processes entities with specific components Aspect - The class that is used to help determine what Components an Entity must contain so a specific EntitySystem can process it EntitySystemManager - Manages all EntitySystems within a Scene EntityManager - Manages entities (i.e. holds all Entities, used to determine whether an Entity has been changed, enables/disables them, etc.) EntityFactory - Creates (and destroys) entities and assigns an ID to them Scene - Contains an EntityManager, EntityFactory, EntitySystemManager and ComponentManager. Has functions to update and initialise the scene. Now, in order for an EntitySystem to efficiently know when to check if an Entity is valid for processing (so I can add it to a specific EntitySystem), it must receive a message from the EntityManager (after a call of activate(Entity& e)). Similarly, the EntityManager must know when an Entity is destroyed from the EntityFactory in the Scene, and the ComponentManager must know when an Entity is created AND destroyed. I do have a Listener/Observer pattern implemented at the moment, but with this pattern I may remove a Listener (which in this case is dependent on the method being called). I mainly have this implemented for specific things related to a game, i.e. Teams, Tagging of entities, etc. So... I was thinking maybe I should call a private method (using friend classes) to send out a notification when an Entity has been activated, deleted, etc., e.g. taken from my EntityFactory: void EntityFactory::killEntity(Entity& e) { // if the entity doesn't exist in the entity manager within the scene if(!getScene()->getEntityManager().doesExsist(e)) { return; // go back to the caller! (should throw an exception or something..) } // tell the ComponentManager and the EntityManager that we killed an Entity getScene()->getComponentManager().doOnEntityWillDie(e); getScene()->getEntityManager().doOnEntityWillDie(e); // notify the listeners for(Mouth::iterator i = getMouth().begin(); i != getMouth().end(); ++i) { (*i)->onEntityWillDie(*this, e); } _idPool.addId(e.getId()); // add the ID to the pool delete &e; // delete the entity } As you can see, on the lines where I am telling the ComponentManager and the EntityManager that an Entity will die, I am calling a method to make sure it handles it appropriately. Now I realise I could do this without calling it explicitly, with the help of that for loop notifying all listener objects connected to the EntityFactory's Mouth (an object used to tell listeners that there's an event); however, is this a good idea (good design, or what)? I've gone over the PROS and CONS, but I just can't decide what I want to do. Calling Explicitly: PROS Faster?
    Since these functions are explicitly called, they can't be "removed" CONS Not flexible Bad design? (friend functions) Calling through Listener objects (i.e. ComponentManager/EntityManager inherits from an EntityFactoryListener) PROS More flexible? Better design? CONS Slower? (virtual functions) Listeners can be removed, i.e. may be removed and not get called again during the program, which could cause a crash. P.S. If you wish to view my current source code, I am hosting it on BitBucket.
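    For comparison, here is a minimal sketch of the listener variant (written in Java purely for brevity; the trade-off is identical in C++, and every name below is hypothetical rather than taken from the poster's code). Every interested manager registers through one channel, so no friend access is needed, at the cost of a virtual dispatch per listener:

        import java.util.ArrayList;
        import java.util.List;

        interface EntityFactoryListener {
            // fired just before an entity is destroyed
            void onEntityWillDie(int entityId);
        }

        class EntityFactory {
            private final List<EntityFactoryListener> listeners = new ArrayList<>();

            void addListener(EntityFactoryListener listener) {
                listeners.add(listener);
            }

            void killEntity(int entityId) {
                // The ComponentManager, the EntityManager, and any game-specific
                // listener (teams, tagging, ...) are all notified the same way.
                for (EntityFactoryListener listener : listeners) {
                    listener.onEntityWillDie(entityId);
                }
                // ...recycle the id and free the entity here
            }
        }

    The CON listed above still applies: if a manager is ever removed from that list, it silently stops hearing about deaths. A hybrid is therefore also defensible, with the two managers hard-wired and only the game-specific listeners going through the Mouth.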

    Read the article

  • Troubleshooting Website problems within the local network

    - by HaydnWVN
    We have an external website which opens fine on some PCs, yet seems to time out (or shows symptoms of timing out, but never actually does) on others. It seems to only affect (some) of our newer HP Pro 3305 MT Workstations, all of which are running Win7 32bit SP1 with all updates. Older PCs (Win7 32bit SP1 & WinXP) are unaffected. Using Google Chrome & Firefox makes no difference. Opening the website in IE9 Compatibility Mode has exactly the same symptoms. All PCs are on the same local network (Workgroup) using the same DNS server & gateway (in-house) on the same internet connection, on the same subnet. There is no proxy server, no content filtering, no load balancing, etc. The only group policy in effect (locally) is for update scheduling. Local firewalls are all the same (Kaspersky WP4) and our external-facing firewall has no IP-specific settings. I have no control over the external website, and traceroute shows the same destination on all PCs. It is a fairly popular website in our industry (horticulture) and I'm not aware of any other people (even other sites within our sister companies) with the same problem. Update: Used Fiddler2 to monitor the HTTP request; it seems it's not getting fulfilled for some reason. Request sent: GET http://www.rhs.org.uk/ HTTP/1.1 Host: www.rhs.org.uk Connection: keep-alive User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-GB,en-US;q=0.8,en;q=0.6 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Log from Fiddler 2 of the request: This session is not yet complete. Press F5 to refresh when session is complete for updated statistics. Request Count: 1 Bytes Sent: 567 (headers:567; body:0) Bytes Received: 0 (headers:0; body:0) ACTUAL PERFORMANCE -------------- ClientConnected: 17:02:33.720 ClientBeginRequest: 17:02:39.118 GotRequestHeaders: 17:02:39.118 ClientDoneRequest: 17:02:39.118 Determine Gateway: 0ms DNS Lookup: 0ms TCP/IP Connect: 46ms HTTPS Handshake: 0ms ServerConnected: 17:02:39.165 FiddlerBeginRequest: 17:02:39.165 ServerGotRequest: 17:02:39.165 ServerBeginResponse: 00:00:00.000 GotResponseHeaders: 00:00:00.000 ServerDoneResponse: 00:00:00.000 ClientBeginResponse: 00:00:00.000 ClientDoneResponse: 00:00:00.000 RESPONSE BYTES (by Content-Type) -------------- ~headers~: 0 Log of a successful request from a working PC (done this morning, excuse the timestamps being different from above): Request Count: 1 Bytes Sent: 493 (headers:493; body:0) Bytes Received: 20,413 (headers:525; body:19,888) ACTUAL PERFORMANCE -------------- ClientConnected: 08:22:47.766 ClientBeginRequest: 08:22:47.766 GotRequestHeaders: 08:22:47.766 ClientDoneRequest: 08:22:47.766 Determine Gateway: 0ms DNS Lookup: 26ms TCP/IP Connect: 30ms HTTPS Handshake: 0ms ServerConnected: 08:22:47.828 FiddlerBeginRequest: 08:22:47.828 ServerGotRequest: 08:22:47.828 ServerBeginResponse: 08:22:48.905 GotResponseHeaders: 08:22:48.905 ServerDoneResponse: 08:22:48.905 ClientBeginResponse: 08:22:48.905 ClientDoneResponse: 08:22:48.905 Overall Elapsed: 00:00:01.1388020 RESPONSE BYTES (by Content-Type) -------------- text/html: 19,888 ~headers~: 525 So my question has evolved into: what is the difference between the 2 requests, and how do I determine why 1 PC is not getting a reply to its GET request?
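    One way to narrow this down (an illustrative sketch, not something from the original thread; the class name and timeouts are arbitrary) is to replay the same GET outside any browser, mirroring the captured headers. If this probe also stalls on the affected HP machines, the browser stack is exonerated and the fault sits lower, in the OS or NIC driver:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class GetProbe {
            public static void main(String[] args) throws Exception {
                HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://www.rhs.org.uk/").openConnection();
                // mirror a couple of headers from the Fiddler capture
                conn.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 6.1)");
                conn.setRequestProperty("Accept-Language", "en-GB,en-US;q=0.8,en;q=0.6");
                conn.setConnectTimeout(10000); // fail fast instead of hanging forever
                conn.setReadTimeout(10000);
                System.out.println("Status: " + conn.getResponseCode());
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    System.out.println("First line: " + in.readLine());
                }
            }
        }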

    Read the article

  • Production Access Denied! Who caused this rule anyways?

    - by Matt Watson
    One of the biggest challenges for most developers is getting access to production servers. In smaller dev teams of fewer than about 5 people, everyone usually has access. Then you hire developer #6, he messes something up in production... and now nobody has access. That is how it always starts in small dev teams. I think just about every rule of life there is gets created this way. One person messes it up for the rest of us. Rules are then put in place to try and prevent it from happening again. Breaking the rules is in our nature. In this example it is for good cause and a necessity to support our applications and troubleshoot problems as they arise. So how do developers typically break the rules? Some create their own method to collect log files off servers so they can see them. Expensive log management programs can collect log files, but log files alone are not enough. Centralizing where important errors are logged to is common. Some lucky developers are given production server access by the IT operations team out of necessity. Wait. That's not fair to all developers and knowingly breaks the company rule!  When customers complain or the system is down, the rules go out the window. Commonly, lead developers get production access because they are ultimately responsible for supporting the application and may be the only person who knows how to fix it. The problem with only giving lead developers production access is that it doesn't scale from a support standpoint. Those key employees become the go-to people to help solve application problems, but they also become a bottleneck. They end up spending up to half of their time every day helping resolve application defects, performance problems, or whatever the fire of the day is. This is actually the last thing you want your lead developers doing. They should be working on something more strategic, like major enhancements to the product. Having production access can actually be a curse if you are the guy stuck hunting down log files all day. Application defects are good tasks for junior developers. They can usually handle figuring out simple application problems. But nothing is worse than being a junior developer who can't figure out those problems while the backlog of them grows and grows. Some of them require production server access to verify a deployment was done correctly, verify config settings, view log files, or maybe just restart an application. Since the junior developers don't have access, they end up bugging the developers who do have access, or they track down a system admin to help. It can take hours or days to see server information that would take seconds or minutes if they had access of their own. It is very frustrating to the developer trying to solve the problem, the system admin being forced to help, and most importantly your customers, who are not happy about the situation. This process is terribly inefficient. Production database access is also important for solving application problems, but it presents a lot of risk if developers are given access. They could see data they shouldn't.  They could write queries by accident to update data, delete data, or merely select every record from every table and bring your database to its knees. Since most of the applications we create are data-driven, it can be very difficult to track down application bugs without access to the production databases. Besides it being against the rule, why don't all developers have access?
    Most of the time it comes down to security, change control, lack of training, and other valid reasons. Developers have been known to tinker with different settings to try and solve a problem and in the process forget what they changed, making the problem worse. So it is a double-edged sword: don't give them access and fixing bugs is more difficult, or give them access and risk having more bugs or major outages being created! Matt Watson, Founder and CEO, Stackify. Agile Support for Agile Developers

    Read the article

  • Booting the liveCD/USB in EFI mode fails on Samsung Tablet XE700T1A

    - by F.L.
    My tablet is a Samsung Series 7 Slate (XE700T1A-A02FR (French language)). It is built on an Intel Sandy Bridge architecture. The main issue with this tablet is that it ships with Windows 7 installed in (U)EFI mode (GPT partition table, etc.), so I'd like to get an EFI dual boot with Ubuntu. But it seems I can't boot the liveCD in EFI mode. It starts loading (up to initrd), but I then get a blank (black) screen. I've tried the nomodeset kernel option (as well as removing quiet and splash) with no luck. [2012-09-27] I have used the Ubuntu 12.04.1 Desktop ISO (I have read somewhere that it is the only one that can boot in EFI mode). I'd say this has something to do with UEFI, since the LiveCD boots in BIOS mode but not in EFI mode. Besides, I am not sure my boot info will help, since I can't boot the LiveCD in EFI mode. As a result I can't install Ubuntu in EFI mode, so it would be the boot info from the liveCD booted in BIOS mode. This happens with a ubuntu-12.04.1-desktop-amd64 iso used on a LiveUSB. The Live USB was created by dd'ing the iso onto the full disk device (i.e. /dev/sdx, no number) of the flash drive. I have also tried copying the LiveCD files onto a primary GPT partition, but with no luck; I just get the grub shell, no menu, no install option. [2012-09-28] Today I tried a flash drive created with Ubuntu's Startup Disk Creator and the alternate 12.04.1 64-bit ISO. I get a grub menu in text mode (which means it did start in EFI mode) with install options / test options. But when I start any of these, I simply get a black screen (no cursor, neither mouse nor text-mode cursor). I tried removing the 'quiet' option and adding nomodeset and acpi=off, but it didn't do any good. So this is the same result as for the LiveCD. [2012-10-01] I have tried a version of the secure remix via usb-creator-gtk. The boot on the USB key has the same symptoms. Boot in EFI mode is impossible (I have a menu, but whatever entry I choose, I get the blank screen problem). The boot in BIOS mode works, and I did the install. Then I used boot-repair to try installing grub-efi and get a system that would boot in EFI mode. But I can't boot this system, because the EFI firmware doesn't seem to detect that sda contains a valid EFI partition. Here is the resulting boot-info: Boot info 1253554 [2012-10-01] Today I have reinstalled the pre-shipped version of Windows 7, and then installed Ubuntu from a secure-remix iso dumped onto a USB flash drive via usb-creator-gtk booted in BIOS mode. When the install ended, I said "continue testing", then I used boot-repair to try to get the bootloader installed. Now, when I boot the tablet, I get the grub menu and it can chainload Windows 7 flawlessly. But when I try to start one of the Ubuntu options I get the same old blank screen. Here is the new boot-info: Boot info 1253927 [2012-10-01] I tried installing the 3.3 kernel by chrooting a live usb boot (secure remix again) into the installed system. Same symptoms. I feel the key to this is that the device's EFI firmware (which is EFI v2.0) exposes the graphics hardware in a way that prevents the kernel from initializing it, and thus prevents it from booting (the kernel stops all drive access just after the screen turns a very dark purple). Here is some info on the UEFI firmware as given by rEFInd: EFI revision: 2.00 Platform: x86_64 (64 bit) Firmware: American Megatrends 4.635 Screen Output: Graphics Output (UEFI), 800x600 [2012-10-08] This weekend I tried loading the kernel with elilo.
    Even though I didn't have any more luck booting the kernel, elilo gives more info while loading it. I think the next step is trying to load a kernel with an EFI stub directly.

    Read the article

  • What are the disadvantages of self-encapsulation?

    - by Dave Jarvis
    Background Tony Hoare's billion-dollar mistake was the invention of null. Subsequently, a lot of code has become riddled with null pointer exceptions (segfaults) when software developers try to use (dereference) uninitialized variables. In 1989, Wirfs-Brock and Wilkerson wrote: Direct references to variables severely limit the ability of programmers to refine existing classes. The programming conventions described here structure the use of variables to promote reusable designs. We encourage users of all object-oriented languages to follow these conventions. Additionally, we strongly urge designers of object-oriented languages to consider the effects of unrestricted variable references on reusability. Problem A lot of software, especially in Java, but likely in C# and C++ as well, often uses the following pattern: public class SomeClass { private String someAttribute; public SomeClass() { this.someAttribute = "Some Value"; } public void someMethod() { if( this.someAttribute.equals( "Some Value" ) ) { // do something... } } public void setAttribute( String s ) { this.someAttribute = s; } public String getAttribute() { return this.someAttribute; } } Sometimes a band-aid solution is used by checking for null throughout the code base: public void someMethod() { assert this.someAttribute != null; if( this.someAttribute.equals( "Some Value" ) ) { // do something... } } public void anotherMethod() { assert this.someAttribute != null; if( this.someAttribute.equals( "Some Default Value" ) ) { // do something... } } The band-aid does not always avoid the null pointer problem: a race condition exists. The race condition is mitigated using: public void anotherMethod() { String someAttribute = this.someAttribute; assert someAttribute != null; if( someAttribute.equals( "Some Default Value" ) ) { // do something... } } Yet that requires two statements (assignment to a local copy and a check for null) every time a class-scoped variable is used, just to ensure it is valid. Self-Encapsulation Ken Auer's Reusability Through Self-Encapsulation (Pattern Languages of Program Design, Addison-Wesley, New York, pp. 505-516, 1994) advocated self-encapsulation combined with lazy initialization. The result, in Java, would resemble: public class SomeClass { private String someAttribute; public SomeClass() { setAttribute( "Some Value" ); } public void someMethod() { if( getAttribute().equals( "Some Value" ) ) { // do something... } } public void setAttribute( String s ) { this.someAttribute = s; } public String getAttribute() { String someAttribute = this.someAttribute; if( someAttribute == null ) { someAttribute = createDefaultValue(); setAttribute( someAttribute ); } return someAttribute; } protected String createDefaultValue() { return "Some Default Value"; } } All duplicate checks for null are superfluous: getAttribute() ensures the value is never null at a single location within the containing class. Efficiency arguments should be fairly moot -- modern compilers and virtual machines can inline the code when possible. As long as variables are never referenced directly, this also allows for proper application of the Open-Closed Principle. Question What are the disadvantages of self-encapsulation, if any? (Ideally, I would like to see references to studies that contrast the robustness of similarly complex systems that use and don't use self-encapsulation, as this strikes me as a fairly straightforward testable hypothesis.)
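    The reusability claim is easy to demonstrate with the classes above: because the base class only touches the attribute through its accessors, a subclass can refine the default value without ever seeing the private field (a minimal sketch building on SomeClass as posted; SpecialClass is a hypothetical name):

        // Sketch: refinement through self-encapsulation alone. If the attribute
        // is ever reset to null, the inherited, lazily-initializing getAttribute()
        // falls back to this refined default; SpecialClass never references the
        // private someAttribute field.
        public class SpecialClass extends SomeClass {
            @Override
            protected String createDefaultValue() {
                return "Some Special Value";
            }
        }

    Any direct field reference inside SomeClass would break this refinement, which is precisely Wirfs-Brock and Wilkerson's point.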

    Read the article

  • Getting Started with jqChart for ASP.NET Web Forms

    - by jqChart
    Official Site | Samples | Download | Documentation | Forum | Twitter Introduction jqChart takes advantage of the HTML5 Canvas to deliver high-performance client-side charts and graphs across browsers (IE 6+, Firefox, Chrome, Opera, Safari) and devices, including iOS and Android mobile devices. Some of the key features are: High-performance rendering. Animations. Scrolling/zooming. Support for an unlimited number of data series and data points. Support for an unlimited number of chart axes. True DateTime axis. Logarithmic and reversed axis scales. A large set of chart types - Bar, Column, Pie, Line, Spline, Area, Scatter, Bubble, Radar, Polar. Financial charts - Stock Chart and Candlestick Chart. The different chart types can be easily combined.  System Requirements Browser Support jqChart supports all major browsers: Internet Explorer - 6+ Firefox Google Chrome Opera Safari jQuery version support The jQuery JavaScript framework is required. We recommend using the latest official stable version of the jQuery library. Visual Studio Support jqChart for ASP.NET does not require using Visual Studio. You can use your favourite code editor. Still, the product has been tested with several versions of Visual Studio .NET, and you can find the list of supported versions below: Visual Studio 2008 Visual Studio 2010 Visual Studio 2012 ASP.NET Web Forms support Supported versions - ASP.NET Web Forms 3.5, 4.0 and 4.5 Installation Download and unzip the contents of the archive to any convenient location. The package contains the following folders: [bin] - Contains the assembly DLLs of the product (JQChart.Web.dll) for WebForms 3.5, 4.0 and 4.5. This is the assembly that you can reference directly in your web project (or better yet, add it to your ToolBox and then drag & drop it from there). [js] - The JavaScript files of jqChart and jqRangeSlider (and the needed libraries). You need to include them in your ASPX page in order to gain the client-side functionality of the chart. The first file is "jquery-1.5.1.min.js" - this is the official jQuery library. jqChart is built upon jQuery library version 1.4.3. The second file you need is the "excanvas.js" JavaScript file. It is used by versions of IE that don't support canvas graphics. The third is the jqChart JavaScript code itself, located in "jquery.jqChart.min.js". The last one is the jqRangeSlider JavaScript, located in "jquery.jqRangeSlider.min.js". It is used when chart zooming is enabled. [css] - Contains the CSS files that jqChart and jqRangeSlider need. [samples] - Contains some examples that use jqChart. For a full list of samples please visit - jqChart for ASP.NET Samples. [themes] - Contains the themes shipped with the product. They are used by the jqRangeSlider. Since jqRangeSlider supports jQuery UI ThemeRoller, any theme compatible with jQuery UI ThemeRoller will work for jqRangeSlider as well. You can download any additional themes directly from jQuery UI's ThemeRoller site, available here: http://jqueryui.com/themeroller/ or reference them from Microsoft's / Google's CDN. <link rel="stylesheet" type="text/css" media="screen" href="http://ajax.aspnetcdn.com/ajax/jquery.ui/1.8.21/themes/smoothness/jquery-ui.css" /> The final result you will have in an ASPX page containing jqChart would be something similar to this (assuming you have copied the [js] files to the Scripts folder and the [css] files to the Content folder of your ASP.NET site, respectively).
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="samples_cs.Default" %> <%@ Register Assembly="JQChart.Web" Namespace="JQChart.Web.UI.WebControls" TagPrefix="jqChart" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html > <head runat="server"> <title>jqChart ASP.NET Sample</title> <link rel="stylesheet" type="text/css" href="~/Content/jquery.jqChart.css" /> <link rel="stylesheet" type="text/css" href="~/Content/jquery.jqRangeSlider.css" /> <link rel="stylesheet" type="text/css" href="~/Content/themes/smoothness/jquery-ui-1.8.21.css" /> <script src="<% = ResolveUrl("~/Scripts/jquery-1.5.1.min.js") %>" type="text/javascript"></script> <script src="<% = ResolveUrl("~/Scripts/jquery.jqRangeSlider.min.js") %>" type="text/javascript"></script> <script src="<% = ResolveUrl("~/Scripts/jquery.jqChart.min.js") %>" type="text/javascript"></script> <!--[if IE]><script lang="javascript" type="text/javascript" src="<% = ResolveUrl("~/Scripts/excanvas.js") %>"></script><![endif]--> </head> <body> <form id="form1" runat="server"> <asp:ObjectDataSource ID="ObjectDataSource1" runat="server" SelectMethod="GetData" TypeName="SamplesBrowser.Models.ChartData"></asp:ObjectDataSource> <jqChart:Chart ID="Chart1" Width="500px" Height="300px" runat="server" DataSourceID="ObjectDataSource1"> <Title Text="Chart Title"></Title> <Animation Enabled="True" Duration="00:00:01" /> <Axes> <jqChart:CategoryAxis Location="Bottom" ZoomEnabled="true"> </jqChart:CategoryAxis> </Axes> <Series> <jqChart:ColumnSeries XValuesField="Label" YValuesField="Value1" Title="Column"> </jqChart:ColumnSeries> <jqChart:LineSeries XValuesField="Label" YValuesField="Value2" Title="Line"> </jqChart:LineSeries> </Series> </jqChart:Chart> </form> </body> </html>   Official Site | Samples | Download | Documentation | Forum | Twitter

    Read the article

  • Extrapolation breaks collision detection

    - by user22241
    Before applying extrapolation to my sprite's movement, my collision worked perfectly. However, after applying extrapolation to my sprite's movement (to smooth things out), the collision no longer works: the collision routine breaks. I am assuming this is because it is acting upon the new coordinate that has been produced by the extrapolation routine (which is situated in my render call). How do I correct this behaviour? I've tried putting an extra collision check just after extrapolation - this does seem to clear up a lot of the problems, but I've ruled this out because putting logic into my rendering is out of the question. I've also tried making a copy of the sprite's X position, extrapolating that and drawing using that rather than the original, thus leaving the original intact for the logic to pick up on - this seems a better option, but it still produces some weird effects when colliding with walls. I'm pretty sure this also isn't the correct way to deal with this. I've found a couple of similar questions on here, but the answers haven't helped me. This is my extrapolation code: public void onDrawFrame(GL10 gl) { //Set/Re-set loop back to 0 to start counting again loops=0; while(System.currentTimeMillis() > nextGameTick && loops < maxFrameskip){ SceneManager.getInstance().getCurrentScene().updateLogic(); nextGameTick+=skipTicks; timeCorrection += (1000d/ticksPerSecond) % 1; nextGameTick+=timeCorrection; timeCorrection %=1; loops++; tics++; } extrapolation = (float)(System.currentTimeMillis() + skipTicks - nextGameTick) / (float)skipTicks; render(extrapolation); } Applying extrapolation: void render(float extrapolation){ //This example shows extrapolation for the X axis only. The Y position (spriteScreenY) is assumed to be valid. extrapolatedPosX = spriteGridX+(spriteXVelocity*dt)*extrapolation; spriteScreenPosX = extrapolatedPosX * screenWidth; drawSprite(spriteScreenPosX, spriteScreenY); } Edit As I mentioned above, I have tried making a copy of the sprite's coordinates specifically to draw with... this has its own problems. Firstly, regardless of the copying, when the sprite is moving it's super-smooth, but when it stops it wobbles slightly left/right, as it's still extrapolating its position based on the time. Is this normal behavior, and can we 'turn it off' when the sprite stops? I've tried having flags for left/right and only extrapolating if either of these is enabled. I've also tried copying the last and current positions to see if there is any difference. However, as far as collision goes, these don't help. If the user is pressing, say, the right button and the sprite is moving right, when it hits a wall, if the user continues to hold the right button down, the sprite will keep animating to the right while being stopped by the wall (therefore not actually moving). However, because the right flag is still set, and also because the collision routine is constantly moving the sprite out of the wall, it still appears to the code (not the player) that the sprite is still moving, and therefore extrapolation continues. So what the player sees is the sprite 'static' (yes, it's animating, but it's not actually moving across the screen), and every now and then it shakes violently as the extrapolation attempts to do its thing... Hope this helps.
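    One pattern that addresses both symptoms (a sketch, not the poster's actual code; the field names mirror the snippets above, but the class itself is hypothetical) is to extrapolate into a render-only copy and to skip extrapolation entirely when the logic-side velocity is zero. If the collision routine also zeroes the velocity while it pins the sprite against a wall, the render side automatically stops extrapolating there as well:

        // Sketch: render-only extrapolated position, suppressed when not moving.
        class Sprite {
            float spriteGridX;      // logic-side position, updated at the fixed tick rate
            float spriteXVelocity;  // collision routine sets this to 0 against a wall
            float spriteScreenY;    // Y assumed valid, as in the original snippet
            float dt;               // fixed logic timestep
            float screenWidth;

            void render(float extrapolation) {
                float renderGridX = spriteGridX;  // copy; the logic position is untouched
                if (spriteXVelocity != 0f) {
                    // only a genuinely moving sprite is extrapolated,
                    // which removes the idle wobble
                    renderGridX += spriteXVelocity * dt * extrapolation;
                }
                drawSprite(renderGridX * screenWidth, spriteScreenY);
            }

            void drawSprite(float x, float y) { /* platform-specific draw call */ }
        }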

    Read the article

  • Scenarios for Throwing Exceptions

    - by Joe Mayo
    I recently came across a situation where someone had an opinion that differed from mine about when an exception should be thrown. This particular case was an issue opened on LINQ to Twitter for an Exception on EndSession.  The premise of the issue was that the poster didn't feel an exception should be raised, regardless of authentication status.  At first, this sounded like a valid point.  However, I went back to review my code and decided not to make any changes. Here's my rationale: 1. The exception doesn't occur if the user is authenticated when EndAccountSession is called. 2. The exception does occur if the user is not authenticated when EndAccountSession is called. 3. The exception represents the fact that EndAccountSession is not able to fulfill its intended purpose - to end the session.  If a session never existed, then it would not be possible to perform the requested action.  Therefore, an exception is appropriate. To illustrate how to handle this situation, I've modified the following code in Program.cs in the LinqToTwitterDemo project: static void EndSession(ITwitterAuthorizer auth) { using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/")) { try { //Log twitterCtx.Log = Console.Out; var status = twitterCtx.EndAccountSession(); Console.WriteLine("Request: {0}, Error: {1}" , status.Request , status.Error); } catch (TwitterQueryException tqe) { var webEx = tqe.InnerException as WebException; if (webEx != null) { var webResp = webEx.Response as HttpWebResponse; if (webResp != null && webResp.StatusCode == HttpStatusCode.Unauthorized) Console.WriteLine("Twitter didn't recognize you as having been logged in. Therefore, your request to end session is illogical.\n"); } var status = tqe.Response; Console.WriteLine("Request: {0}, Error: {1}" , status.Request , status.Error); } } } As expected, LINQ to Twitter wraps the exception in a TwitterQueryException as the InnerException.  The TwitterQueryException serves a very useful purpose through its Response property.  Notice in the example above that the response has Request and Error properties.  These properties correspond to the information that Twitter returns as part of its response payload.  This is often useful while debugging to help you understand why Twitter was unable to perform the requested action.  Other times it's cryptic, but that's another story.  At least you have some way of knowing in your code how to anticipate and handle these situations, along with having extra information to debug with. To sum things up, there are two points to make: when and why an exception should be raised, and when to wrap and re-throw an exception in a custom exception type. I felt it was necessary to allow the exception to be raised because the called method was unable to perform the task it was designed for.  I also felt that it is inappropriate for a general library to do anything with exceptions, because that could potentially hide a problem from the caller.  A related point is that it should be the exclusive decision of the application that uses the library what to do with an exception.  Another aspect of this situation is that I wrapped the exception in a custom exception and re-threw.  This is a tough call because I don't want to hide any stack trace information.
    However, the need to make the exception more meaningful by including vital information returned from Twitter swayed me toward designing an interface that was as helpful as possible to library consumers.  As shown in the code above, you can dig into the exception and pull out a lot of good information, such as the fact that the underlying HTTP response was a 401 Unauthorized.  In all, trade-offs are seldom perfect for all cases, but given that the method was unable to perform its intended function, that this is a library, and that the extra information can be more helpful, this seemed to be the better design. @JoeMayo
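    The wrap-and-rethrow half of that trade-off is mechanical; here is a minimal sketch (written in Java, with hypothetical names, mirroring what TwitterQueryException does in C#) of a custom exception that carries the service's response details without hiding the original stack trace:

        // Sketch: enrich an exception with response context; passing the original
        // exception as the cause preserves its stack trace for debugging.
        public class QueryException extends RuntimeException {
            private final String request;  // what was sent to the service
            private final String error;    // what the service reported back

            public QueryException(String message, Throwable cause,
                                  String request, String error) {
                super(message, cause);
                this.request = request;
                this.error = error;
            }

            public String getRequest() { return request; }
            public String getError()   { return error; }
        }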

    Read the article

  • Setup and configure a MVC4 project for Cloud Service(web role) and SQL Azure

    - by MagnusKarlsson
    I aim to keep this blog post updated and will add related posts to it. Since there are a lot of these out there, I link to others that have done kind of the same before me; kind of a blog-DRY pattern that I'm aiming for. I also keep all mistakes and misconceptions for others to see. As an example: if I hit a stacktrace, I will google it if I don't directly figure out the reason for it. I will then probably take the most plausible result and try it out. If it fails because I misinterpreted the error, I will not delete it from the log but keep it for future reference and for others to see. That way, people who find this blog can see multiple solutions for indexed stacktraces, and I can better remember how to do stuff. To avoid my errors, I recommend you read through it all before going from start to finish. The steps: Setup project in VS2012. (msdn blog) Setup Azure Services (half of mpspartners.com blog) Setup connection strings and configuration files (msdn blog + notes) Export certificates. Create Azure package from VS2012 and deploy to staging (same steps as for production). Connection string error. Set up the Visual Studio project: http://blogs.msdn.com/b/avkashchauhan/archive/2011/11/08/developing-asp-net-mvc4-based-windows-azure-web-role.aspx Then log in to Azure to set up the services. Stop following this guide at the "publish website" part, since we'll be uploading a package: http://www.mpspartners.com/2012/09/ConfiguringandDeployinganMVC4applicationasaCloudServicewithAzureSQLandStorage/ When set up (connection strings for debug and release and all), follow this guide to set up the configuration files: http://msdn.microsoft.com/en-us/library/windowsazure/hh369931.aspx Trying to package our application at this step will generate the following warning: 3>MvcWebRole1(0,0): warning WAT170: The configuration setting 'Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString' is set up to use the local storage emulator for role 'MvcWebRole1' in configuration file 'ServiceConfiguration.Cloud.cscfg'. To access Windows Azure storage services, you must provide a valid Windows Azure storage connection string. Right-click the web role under Roles in Solution Explorer and choose Properties. Choose "Service configuration: Cloud". At "specify storage account credentials" we will copy/paste our account name and key from the Azure management platform (figure 3.1). 4. Right-click Remote Desktop Configuration, select the certificate, and export it to a file. We need to allow it in the portal manager (figure 4.1). 5. Now right-click the cloud project and select Package (figure 5.1: dialogue box; figure 5.2: package success). Now copy the path to the packaged file and go to the management portal again. Click your web role and choose staging (or production). Upload (figure 5.3). Tick the box about the single instance if that's what you want or if you don't know what it means; otherwise the following will happen (see image 4.6; figure 5.4: dialogue box). When you have clicked the accept button you will see the following screen with some green indicators down at the right corner. Click them if you want to see status (figure 5.5: information screen). (figure 5.6) "Failed to deploy application. The upload application has at least one role with only one instance. We recommend that you deploy at least two instances per role to ensure high availability in case one of the instances becomes unavailable." To fix, go to step 5.4. If you forgot to (or just didn't know you were supposed to) export your certificates, the following error will occur. Side note: the following thread suggests.
To prevent: "Enable Remote Desktop for all roles" when right-clicking BIAB and choosing "Package". But in my case it was the not so present certificates. I fund the solution here.http://social.msdn.microsoft.com/Forums/en-US/dotnetstocktradersampleapplication/thread/0e94c2b5-463f-4209-86b9-fc257e0678cd5.75.8 Success! 5.9 Nice URL n' all. (More on that at another blog post).6. If you try to login and getWhen this error occurs many web sites suggest this is because you need:http://nuget.org/packages/Microsoft.AspNet.Providers.LocalDBOr : http://nuget.org/packages/Microsoft.AspNet.ProvidersBut it can also be that you don't have the correct setup for converting connectionstrings between your web.config to your debug.web.config(or release.web.config, whichever your using).Run as suggested in the "ordinary project in your solution. Go to the management portal and click update.

    Read the article

< Previous Page | 241 242 243 244 245 246 247 248 249 250 251 252  | Next Page >