Search Results

Search found 9410 results on 377 pages for 'simulator difference'.

Page 285/377 | < Previous Page | 281 282 283 284 285 286 287 288 289 290 291 292  | Next Page >

  • Retrieving Custom Attributes Using Reflection

    - by Scott Dorman
    The .NET Framework allows you to easily add metadata to your classes by using attributes. These attributes can be ones that the .NET Framework already provides, of which there are over 300, or you can create your own. Using reflection, the ways to retrieve the custom attributes of a type are: System.Reflection.MemberInfo public abstract object[] GetCustomAttributes(bool inherit); public abstract object[] GetCustomAttributes(Type attributeType, bool inherit); public abstract bool IsDefined(Type attributeType, bool inherit); System.Attribute public static Attribute[] GetCustomAttributes(MemberInfo member, bool inherit); public static bool IsDefined(MemberInfo element, Type attributeType, bool inherit); If you take the following simple class hierarchy: public abstract class BaseClass { private bool result;   [DefaultValue(false)] public virtual bool SimpleProperty { get { return this.result; } set { this.result = value; } } }   public class DerivedClass : BaseClass { public override bool SimpleProperty { get { return true; } set { base.SimpleProperty = value; } } } Given a PropertyInfo object (which is derived from MemberInfo, and represents a property in reflection), you might expect that these methods would return the same result. Unfortunately, that isn’t the case. The MemberInfo methods strictly reflect the metadata definitions, ignoring the inherit parameter and not searching the inheritance chain when used with a PropertyInfo, EventInfo, or ParameterInfo object. They also return all custom attribute instances, including those that don’t inherit from System.Attribute. The Attribute methods are closer to the implied behavior of the language (and probably closer to what you would naturally expect). They do respect the inherit parameter for PropertyInfo, EventInfo, and ParameterInfo objects and search the implied inheritance chain defined by the associated methods (in this case, the property accessors). These methods also only return custom attributes that inherit from System.Attribute. This is a fairly subtle difference that can produce very unexpected results if you aren’t careful. For example, to retrieve the custom attributes defined on SimpleProperty, you could use code similar to this: PropertyInfo info = typeof(DerivedClass).GetProperty("SimpleProperty"); var attributeList1 = info.GetCustomAttributes(typeof(DefaultValueAttribute), true); var attributeList2 = Attribute.GetCustomAttributes(info, typeof(DefaultValueAttribute), true);   The attributeList1 array will be empty while the attributeList2 array will contain the attribute instance, as expected. Technorati Tags: Reflection,Custom Attributes,PropertyInfo

    Read the article

  • Oracle Tutor: *** CAUTION to Word .docx Users ***

    - by [email protected]
    Microsoft released a security update KB969604 for Office 2007 (around June 2009). This update causes document variables within Word docx files to be scrambled. This update might still be pushed out via Office 2007 updates. DO NOT save files as docx using MS Office 2007 until you apply the MS hotfix # 970942, available here. If you are using Windows XP with Office 2003 or Office 2000 and have installed an older Office 2007 compatibility pack, documents saved as docx may also cause the scrambled document variables. Installing the 2007 compatibility pack published on 1/6/2010 (version 4) will prevent the document variables from becoming corrupt. Those on Windows 2000 may not be able to install the latest compatibility pack, or the compatibility pack may not function properly. This situation will hopefully be rectified in the coming months. What is a document variable? Document variables store data inside the document, invisible to the user. The Tutor software uses them when converting the document to HTML and when creating the flowchart, just to name a couple of uses. How will you know if a document's variables are scrambled? The difficulty in diagnosing the issue is that the symptoms can take myriad forms. There isn't a single error message or a single feature that one can point to and say, "test for the problem by doing this." The best clue about the error is seeing any kind of string in an error message that has garbage characters, question marks, xml code snippets, or just nonsense, such as "Language ?????????????xlr;lwlerkjl could not be found." It is also possible to see the corrupted data in the footers of the Word docs. And, just because the footers look correct does not mean that the document variables are not corrupted. The corruption problem does not occur in every document variable in the document, just some of them. Often it is less than a quarter of them. What is the difference between docx files and doc files? Office 2007 uses Office Open XML formats with .docx and .docm filename extensions. - Docx is an Office Open XML word document. - Docm is a macro enabled Office Open XML document. This means the file structure behind the scenes is quite different from the binary file formats used prior to Office 2007 such as .doc, .dot, .xls, and .ppt. Solution Summary: For Windows XP and Word 2007: install the hotfix, or save files as *.doc. For Windows XP and Word 2000 and 2003: install the latest compatibility pack or save files as *.doc. For Windows 2000 with Word 2000 or 2003: do not use any compatibility pack, save files as *.doc. Emily Chorba, Principal Product Manager for Oracle Tutor

    Read the article

  • The value of money

    - by ambreesh
    A dictionary definition of money is "any circulating medium of exchange, including coins, paper money, and demand deposits". If you ask an economist for a definition of money, you will be introduced to terms like M1, M2, M3, all of which denote tangible assets - currency, and anything that is liquid enough to be used as currency; checks, stamps and now mobile minutes being examples. The macroeconomic theory of money is fascinating - the effect of money supply on exchange rates and interest rates, the concept of the "money multiplier" (if I deposit $10 into a bank, the bank will likely loan $8 of it to someone else, who will then give it to someone else in exchange for goods and services, who will then likely deposit it again, which will result in the bank loaning it again and so on - making that $10 of money supply worth a lot more ($10+$8+$x+...)).  But all this depends on money supply - in other words, money that is printed by the mint. The Treasury Department spends a lot of time figuring out how much money to print; there is a lot being written about QE2 nowadays, which is intended to increase the money supply. Money is used to purchase goods and services, and yes it is saved too, but that is so one can purchase goods and services later. Completely unrelated, there is a sea change occurring in the web world, dominated by, I believe, Facebook. With 500M active users and growing, FB has the ability to introduce a "money supply" which is completely unrelated to today's "money". Using today's money, a FB user can buy a certain number of FB$s, and then use the FB$s within FB to purchase goods and services - with the money multiplier kicking in. I remember talking with a colleague about this a few years ago; the true way to monetize the web is to introduce an alternative system to the existing one, and FB has the ability to do just that. There is enough momentum, enough mass for FB to start to monetize its user base. And completely screw up the economists at the Treasury, not to mention disintermediating the banks completely. The only other ubiquitous asset is mobile minutes. People exchanging mobile minutes for tangible goods and services happens today; the big difference however is the demographic. While Safaricom offers this ability in Kenya today, FB has the 15-40 year old middle class user as its user. And the next generation is growing up with FB as a standard channel for communicating with their peers. Virtual flowers when going in for the kill? If your target is an avid FB user, why not? It certainly is a lot more green - no pun intended!
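    To put a number on that multiplier arithmetic: if each deposit is re-lent at 80% (a 20% reserve ratio, chosen purely for illustration), the series converges to deposit / (1 - 0.8), so the original $10 supports roughly $50 of money supply. A quick sketch:

        # Money multiplier as a geometric series: each round re-lends 80% of the previous deposit.
        # The 20% reserve ratio is an assumption for illustration only.
        deposit, relend_rate = 10.0, 0.8
        total = sum(deposit * relend_rate ** n for n in range(100))  # 10 + 8 + 6.4 + ...
        print(round(total, 2))  # ~50.0, i.e. deposit / (1 - relend_rate)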

    Read the article

  • Updates about Multidimensional vs Tabular #ssas #msbi

    - by Marco Russo (SQLBI)
    I recently read the blog post from James Serra Tabular model: Not ready for prime time? (read also the comments because there are discussions about a few points raised by James) and the following post from Christian Wade Multidimensional or Tabular. In the last 2 years I have worked with many companies adopting Tabular in different scenarios and I agree with some of the points expressed by James in his post (especially about missing features in Tabular if compared to Multidimensional), but I strongly disagree on others. In general, Tabular is a good choice for a new project when: the development team does not have a good knowledge of Multidimensional and MDX (DAX is faster to learn, not as easy as MS sells it, but definitely easier than MDX); you don’t need calculations based on hierarchies (common in certain financial applications, but not as common as it could seem); there are important calculations based on distinct count measures; there are complex calculations based on many-to-many relationships. Until now, I have never suggested migrating an existing Multidimensional model to a Tabular one. There should be very important reasons for that, such as performance issues in distinct count and many-to-many relationships that cannot be easily solved by optimizing the Multidimensional model, but I still have never encountered this scenario. I would say that in 80% of the new projects, you might use either Multidimensional or Tabular and the real difference is the time-to-market depending on the skills of the development team. So it’s not strange that those who are used to Multidimensional are not moving to Tabular, since they don't get a particular benefit from the new model unless specific requirements exist. The recent DAXMD feature that allows using SharePoint Power View on Multidimensional is a really important one, even if I’d also like to have Excel Power View enabled for this scenario (this should be just a question of time). Another scenario in which I’m seeing a growing adoption of Tabular is in companies that create models for their product/service and do that by using XMLA or Tabular AMO 2012. I am used to calling them ISVs, even if those providing services cannot really be defined in this way. These companies are facing the multitenancy challenge with Tabular, and even if this is a niche market, I see some potential here, because adopting Tabular seems a much more natural choice than Multidimensional in those scenarios where an analytical engine has to be embedded to deliver one of the features of a larger product/service delivered to customers. I’d like to see other feedback in the comments: tell your story of choosing between Tabular and Multidimensional in a BI project you started with SQL Server 2012, thanks!

    Read the article

  • Remote Task Flow vs. WSRP Portlets

    - by Frank Nimphius
    A remote task flow is a bounded task flow that is deployed as a stand-alone Java EE application on a remote server with its URL Invoke property set to url-invoke-allowed. The remote task flow is accessed either from a direct browser GET request or, when called from another ADF application, through the task flow call activity. For more information about how to invoke remote task flows from a task flow call activity see chapter 15.6.4 How to Call a Bounded Task Flow Using a URL of the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework at http://docs.oracle.com/cd/E23943_01/web.1111/b31974/taskflows_activities.htm#CHDJDJEF Compared to WSRP portlets, remote task flows in Oracle JDeveloper 11g R1 and R2 have a functional limitation in that they cannot be embedded as a region on a page but require the calling ADF application to navigate off to another application and page. The difference between a remote task flow call using the task flow call activity and a simple redirect to a remote Java EE application is that the remote task flow has a state token attached that allows the state of the calling application to be restored upon task flow return. A use case for a remote task flow call activity is a "yellow page lookup" scenario in which different ADF applications use a remote task flow to look up people, products or similar and return a selected value to the calling application. Note that remote task flow calls need to be performed from a bounded or unbounded top level task flow of the calling application. If called from a region (using the parent call activity) in a page, the region state is not recovered upon task flow return. ADF developers recently have identified remote task flows as an architecture pattern to partition their ADF applications into independently deployed Java EE applications. While this sounds like a desirable use of the remote task flow feature, it is not possible to achieve as long as remote task flows don't render as an ADF region.

    Read the article

  • Help with this optimization

    - by Milo
    Here is what I do: I have bitmaps which I draw into another bitmap. The coordinates are from the center of the bitmap, thus on a 256 by 256 bitmap, an object at 0.0,0.0 would be drawn at 128,128 on the bitmap. I also found the furthest extent and made the bitmap size 2 times the extent. So if the furthest extent is 200,200 pixels, then the bitmap's size is 400,400. Unfortunately this is a bit inefficient. If a bitmap needs to be drawn at 500,500 and the other one at 300,300, then the target bitmap only needs to be 200,200 in size. I cannot seem to find a correct way to draw in the components correctly with a reduced size. I figure out the target bitmap size like this: float AvatarComposite::getFloatWidth(float& remainder) const { float widest = 0.0f; float widestNeg = 0.0f; for(size_t i = 0; i < m_components.size(); ++i) { if(m_components[i].getSprite() == NULL) { continue; } float w = m_components[i].getX() + ( ((m_components[i].getSprite()->getWidth() / 2.0f) * m_components[i].getScale()) / getWidthToFloat()); float wn = m_components[i].getX() - ( ((m_components[i].getSprite()->getWidth() / 2.0f) * m_components[i].getScale()) / getWidthToFloat()); if(w > widest) { widest = w; } if(wn > widest) { widest = wn; } if(w < widestNeg) { widestNeg = w; } if(wn < widestNeg) { widestNeg = wn; } } remainder = (2 * widest) - (widest - widestNeg); return widest - widestNeg; } And here is how I position and draw the bitmaps: int dw = m_components[i].getSprite()->getWidth() * m_components[i].getScale(); int dh = m_components[i].getSprite()->getHeight() * m_components[i].getScale(); int cx = (getWidth() + (m_remainderX * getWidthToFloat())) / 2; int cy = (getHeight() + (m_remainderY * getHeightToFloat())) / 2; cx -= m_remainderX * getWidthToFloat(); cy -= m_remainderY * getHeightToFloat(); int dx = cx + (m_components[i].getX() * getWidthToFloat()) - (dw / 2); int dy = cy + (m_components[i].getY() * getHeightToFloat()) - (dh / 2); g->drawScaledSprite(m_components[i].getSprite(),0.0f,0.0f, m_components[i].getSprite()->getWidth(),m_components[i].getSprite()->getHeight(),dx,dy, dw,dh,0); I basically store the difference between the original 2 * longest extent bitmap and the new optimized one, then I translate by that much which I would think would cause me to draw correctly but then some of the components look cut off. Any insight would help. Thanks
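    One way I could think about the size and offset (sketched here with simplified, hypothetical names rather than the AvatarComposite API; one axis shown, the vertical axis works the same way):

        # Sketch: size the target bitmap to the tight horizontal bounds of all components
        # and remember the pixel offset where float coordinate 0.0 should land.
        def horizontal_bounds(components, width_to_float):
            lo, hi = float("inf"), float("-inf")
            for c in components:
                half = (c.sprite_width / 2.0) * c.scale / width_to_float
                lo = min(lo, c.x - half)
                hi = max(hi, c.x + half)
            return lo, hi

        # bitmap_width_px = (hi - lo) * width_to_float
        # origin_px       = -lo * width_to_float        # not bitmap_width_px / 2
        # each component is then drawn at origin_px + c.x * width_to_float - dw / 2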

    Read the article

  • SharePoint 2010, Cloud, and the Constitution

    - by Michael Van Cleave
    The other evening an article on the Red Tape Chronicles caught my eye. The article, written by Bob Sullivan and titled "The Constitutional Issues of Cloud Computing", was very interesting in regards to the direction most of the technical world is going. We all have been inundated about utilizing cloud computing for reasons of price, availability, or even scalability; but what Bob brings up is a whole separate view of why a business might not want to move toward the cloud for services or applications. The overall point to the article was pretty simple. It all boiled down to the summation that hosting "Things" in the cloud (Email, Documents, etc…) is interpreted differently under the law regarding constitutional search and seizure than say a document or item that is kept in physical form at a business or home. Whereas if you physically have it stored, someone would have to get a warrant to search for it or seize it, but if it is stored off in the cloud and the ISV or provider is subpoenaed for the item then they will usually give access to the information. Obviously this is a big difference in interpretation of the law and the constitution due to technology. So you might ask, "Where does this fit in with SharePoint?" Well, the overall push for this next version of SharePoint is one that gives a business ultimate flexibility to utilize the Cloud. In one example this upcoming version gracefully lends itself to Multi Tenancy so that online or "Cloud" hosting would be possible by Service Providers. Another aspect to the upcoming version is that it has updated its ability to store content outside of the database and in a cheaper commoditized storage facility. This is called Remote Blob Storage (or RBS) which is the next evolution of External Blob Storage (or EBS). With this new functionality that businesses might look forward to, it is extremely important for them to understand that they might be opening themselves up to laws that do not need a warrant to search or seize their information that is stored in the cloud. It will be interesting to see how this all plays out in the next few months. Usually the laws change slowly in comparison to technology, so it might be a while until we see if it is actually constitutional to treat someone's content in the cloud differently than it would be treated in their possession; however, until some type of parity happens or more concrete laws address the differences, be very careful about what you put in the cloud. Michael

    Read the article

  • Tales of a corrupt SQL log

    - by guybarrette
    Warning: I’m a simple dev, not an all powerful DBA with godly powers. This morning, one of my sites was down and DNN reported a problem with the database.  A quick series of tests revealed that the culprit was a corrupted log file. Easy fix I said, I have daily backups so it’s just a matter of restoring a good copy of the database and log files.  Well, I found out that’s not exactly true.  You see, for this database, I have daily file backups and these are not database backups created by SQL Server. So I restored a set of files from a couple of days ago, stopped the SQL service, copied the files over the bad ones, restarted the service only to find out that SQL doesn’t like when you do that.  It suspects something fishy and marks the database as suspect.  A database marked as suspect can’t be accessed at all.  So now what? I searched throughout the tubes of the InterWeb and found that you can restore from a corrupted log file by creating a new database with the same name as the defective one, then copy the restored database file (the one with data) over the newly created one.  Sweet!  But you still end up with SQL marking the database as suspect but at least, the newly created log is OK.  Well not true, it’s not corrupted but the lack of data makes it not OK for SQL so you need to rebuild the log.  How can you do that when SQL blocks any action on the database?  First, you need to change the database status from suspect to emergency.  Then you need to set the database for single access only.  After that, you need to repair the log with DBCC and do the DBA dance.  If you dance long enough, SQL should repair the log file.  Now you need to set the access back to multi user.  Here’s the T-SQL script: use master GO EXEC sp_resetstatus 'MyDatabase' ALTER DATABASE MyDatabase SET EMERGENCY Alter database MyDatabase set Single_User DBCC checkdb('MyDatabase') ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE DBCC CheckDB ('MyDatabase', REPAIR_ALLOW_DATA_LOSS) ALTER DATABASE MyDatabase SET MULTI_USER So I guess that it would have been a lot easier to restore a SQL backup.  I can’t really say but the InterWeb seems to say so.  Anyway, lessons learned: Vive la différence: File backups are different than SQL backups. Don’t touch me: SQL doesn’t like it when you restore a file over a corrupted one. The more the merrier: You should do both SQL and file backups. WTF?: The InterWeb provides you with dozens of ways to deal with the problem but many are SQL 2000 or SQL 2005 only, many are confusing and many are written in strange dialects only DBAs understand.

    Read the article

  • Customer won't decide, how to deal?

    - by Crazy Eddie
    I write software that involves the use of measured quantities, many input by the user, most displayed, that are fed into calculation models to simulate various physical thing-a-majigs. We have created a data type that allows us to associate a numeric value with a unit, we call these "quantities" (big duh). Quantities and units are unique to dimension. You can't attach kilogram to a length for example. Math on quantities does automatic unit conversion to SI and the type is dimension safe (you can't assign a weight to a pressure for example). Custom UI components have been developed that display the value and its unit and/or allow the user to edit them. Dimensionless quantities, having no units, are a single, custom case implemented within the system. There's a set of related quantities such that our target audience apparently uses them interchangeably. The quantities are used in special units that embed the conversion factors for the related quantity dimensions...in other words, when using these units converting from one to another simply involves multiplying the value by 1 to the dimensional difference. However, conversion to/from the calculation system (SI) still involves these factors. One of these related quantities is a dimensionless one that represents a ratio. I simply can't get the "customer" to recognize the necessity of distinguishing these values and their use. They've picked one and want to use it everywhere, customizing the way we deal with it in special places. In this case they've picked one of the dimensions that has a unit...BUT, they don't want there to be a unit (GRR!!!). This of course is causing us to implement these special overrides for our UI elements and such. That of course is often times forgotten and worse...after a couple months everyone forgets why it was necessary and why we're using this dimensional value, calling it the wrong thing, and disabling the unit. I could just ignore the "customer" and implement the type as the dimensionless quantity, which makes most sense. However, that leaves the team responsible for figuring it out when they've given us a formula using one of the other quantities. We have to not only figure out that it's happening, we have to decide what to do. This isn't a trivial deal. The other option is just to say to hell with it, do it the customer's way, and let it waste continued time and effort because it's just downright confusing as hell. However, I can't count the amount of times someone has said, "Why is this being done this way, it makes no sense at all," and the team goes off the deep end trying to figure it out. What would you do? Currently I'm still attempting to convince them that even if they use terms interchangeably, we at the least can't do that within the product discussion. Don't have high hopes though.
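    To make the distinction concrete, here is a toy sketch of what "dimension safe" means in practice (nothing like our real implementation, just an illustration):

        # Toy illustration only: a value tagged with a dimension, where math is
        # allowed only between matching dimensions.
        class Quantity:
            def __init__(self, value, dimension):
                self.value = value          # stored in SI units
                self.dimension = dimension  # e.g. "length", "mass", "dimensionless"

            def __add__(self, other):
                if self.dimension != other.dimension:
                    raise TypeError("cannot add %s to %s" % (self.dimension, other.dimension))
                return Quantity(self.value + other.value, self.dimension)

        length = Quantity(2.0, "length")
        ratio = Quantity(0.5, "dimensionless")   # the contested ratio quantity
        length + length                          # fine
        # length + ratio                         # raises TypeError: dimension mismatch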

    Read the article

  • Test your internet connection - Emtel Mobile Internet

    After yesterday's report on Emtel Fixed Broadband (I'm still wondering where the 'fixed' part is), I did the same tests on Emtel Mobile Internet. For this I'm using the Huawei E169G HSDPA USB stick, connected to the same machine. Actually, this is my fail-safe internet connection and the system automatically switches between them if a problem, let's say timeout, etc. has been detected on the main line. For better comparison I used exactly the same servers on Speedtest.net. The results Following are the results of Rose Hill (hosted by Emtel) and respectively Frankfurt, Germany (hosted by Vodafone DE): Speedtest.net result of 31.05.2013 between Flic en Flac and Rose Hill, Mauritius (Emtel - Mobile Internet) Speedtest.net result of 31.05.2013 between Flic en Flac and Frankfurt, Germany (Emtel - Mobile Internet) As you might easily see, there is a big difference in speed between national and international connections. More interestingly are the results related to the download and upload ratio. I'm not sure whether connections over Emtel Mobile Internet are asymmetric or symmetric like the Fixed Broadband. Might be interesting to find out. The first test result actually might give us a clue that the connection could be asymmetric with a ratio of 3:1 but again I'm not sure. I'll find out and post an update on this. It depends on network coverage Later today I was on tour with my tablet, a Samsung Galaxy Tab 10.1 (model GT-P7500) running on Android 4.0.4 (Ice Cream Sandwich), and did some more tests using the Speedtest.net app. The results are actually as expected and in areas with better network coverage you will get better results after all. At least, as long as you stay inside the national networks. For anything abroad, it doesn't really matter. But see for yourselves: Speedtest.net result of 31.05.2013 between Cascavelle and servers in Rose Hill, Mauritius (Emtel - Mobile Internet), Port Louis, Mauritius and Kuala Lumpur, Malaysia It's rather shocking and frustrating to see how the speed on international destinations goes down. And the full capability of the tablet's integrated modem (HSDPA: 21 Mbps; HSUPA: 5.76 Mbps) isn't used, too. I guess, this demands more tests in other areas of the island, like Ebene, Pailles or Port Louis. I'll keep you updated... The question remains: Alternatives? After the publication of the test results on Fixed Broadband I had some exchange with others on Facebook. Sadly, it seems that there are really no alternatives to what Emtel is offering at the moment. There are the various internet packages by Mauritius Telecom feat. Orange, like ADSL, MyT and Mobile Internet, and there is Bharat Telecom with their Bees offer which is currently limited to Ebene and parts of Quatre Bornes.

    Read the article

  • Oracle went back to school !....

    - by Cristina Ciocoiu
    I am Georgiana, Contracts Manager for Oracle University and Advanced Customer Services in Romania. I started working for Oracle 4 years ago as a Contracts Specialist. Two years ago I became a manager of a team of 9 Contracts Specialists. On a sunny day in March some members of my team visited the students of the Academy of Economic Studies, accompanied by Recruitment colleagues. This was part of a new initiative to raise awareness of career opportunities at Oracle. We spent approximately 2 hours illustrating and explaining different aspects of the day-to-day activities of an Oracle Contracts Specialist to the future graduates of the Academy. Role Play Since a role play is worth 1000 job descriptions, the audience witnessed an entertaining performance on the contracting process from the phase of the negotiation with the customer to actual signing of the contract. The main focus was on the role of the Contracts Specialist liaising with all the groups involved and ensuring that the contract is compliant with Oracle policies while generating the expected revenue. However, the team took other roles as well, i.e. Sales Representative, Customer, Business Approver and Lawyer, to demonstrate their role in the process. As each of these roles only has a small slice of the big pie, it is vital to understand what happens before and after you come on stage as a Contracts Specialist. Contracts Specialist Being a Contracts Specialist goes beyond simply knowing what policies apply; it means understanding Oracle’s core business model, understanding customers’ requests and addressing them in the most effective way. The job also involves connecting smaller teams that are often geographically dispersed across multiple regions so that they become a bigger, stronger and successful team. You are the expert in this key position that can facilitate the closing of a deal or stop it from happening if the risk is too high. The role play provided insights on both. Why I love this job Events of this kind are sometimes just as useful for the “recruiters” as for the “recruits”. For me, as a presenter, it was an excellent opportunity to think about the many reasons why I love what I do in the Contracts department every day and to share this with the students. I wanted to explain to the audience, who are still considering education and career possibilities, that what we do in Contracts DOES make a difference. You have the power to achieve targets that you did not think reachable before. Working in the dynamic Oracle environment shapes you as a person and there is a lot to take away from this experience. Looking back to my years in the Academy (I graduated from the Academy myself), I wish I could have listened to more people talking about their great jobs and about how I could get there. If those were Oracle people I might have been writing this article sooner. :) If you are interested in joining the Contracts team please click here for more information or contact lavinia.protopopescu-AT-oracle-DOT-com. You can find all openings in Romania via http://campus.oracle.com

    Read the article

  • Five Best Practices for Going Mobile

    - by kellsey.ruppel
    76% of IT decision makers indicate mobile trends will have a high to extremely high impact on their organization. Has your organization gone mobile? Looking for some ideas on how to get started? John Brunswick shares his Best Practices for Going Mobile. Mobile technology has gone from nice-to-have to a cornerstone of user engagement. Mobile access enables social networking, decision support, purchasing, content consumption, and location-based searching, extending experiences beyond what is available in traditional desktop computing.  Organizations rushing to ensure their brand's mobile availability may have taken a tactical approach to implementation, but strategically approaching mobile can enable greater returns on a similar investment and subsequent mobile projects. Here are some strategic considerations for delivering products, services, and information to mobile constituents.  Who, Why, and What? Ask yourself these key questions: who are you attempting to engage through the channel, and why are they engaging you through this channel? What experience will satisfy their needs? What outcome will support your core business? Will you be informing and/or transacting with this person?  Mobile Behavior. Mobile users generally engage for a very specific purpose. Ensure that access to information, services, and products is streamlined. Arriving on a mobile site through search only to be asked to search again frustrates users.  Mobile Is Broad. After establishing the audience and goal, review technology requirements to support them. Do you need a mobile Website, native mobile application, or both? Do you need to support multiple devices? Know the difference between native mobile and mobile Web.  Social Strategy. Users are more likely to trust reviews from peers than marketing information from a vendor. If you are selling products or services, be sure to make social integration part of your strategy.  Content Management. Consider a shared content platform strategy for Web and mobile projects. Fresh, consistent content is important for high-quality experiences. Read more from John Brunswick.We'll also be talking mobile strategies and how you can transform your portal experience and optimize online engagement -- making your portals more interactive and more engaging across multiple channels in a webcast tomorrow. We hope you'll join us!

    Read the article

  • box2d resize bodies arround point

    - by philipp
    I have a compound object, consisting of a b2Body, vector graphics and a list of polygons which describe the b2Body's shapes. This object has its own transformation matrix to centralize the storage of transformations. So far everything is working quite fine, even scaling works, but not if I scale around a point. In the initialization phase of the object it is scaled around a point. This happens in this order: transform the main matrix; transform the vector graphics and the polygons; recreate the b2Body. After this function ran, the shapes and all the graphics are exactly where they should be, BUT: after the first steps of the b2World the graphical stuff moves away from the body. When I ran the debugger I found out that the position of the body is 0/0. The red dot shows the center of scaling. The first image shows the basic setup and the second the final position of the graphics. This distance stays constant for the rest of the simulation. If I set the position via myBody.SetPosition( sx, sy ); the whole scenario just plays out a bit more distant from the origin. Any idea how to fix this? EDIT: I came deeper down to the problem and it lies in the fact that I must not scale the transform matrix for the b2Body shapes around the center, but set the b2Body's position back to the point after scaling. But how can I calculate that point? EDIT 2: I came even deeper down to it, even solved it, but this is a slow solution and I hope that there is somebody who understands what formula I need. Assuming we have a set of polygons relative to an origin as basis shapes for a b2Body, scaling the whole object around a certain point is done in the following steps: I scale everything around the center except the polygons; I create a clone of the polygons matrix; I scale this clone around the point; I calculate dx, dy as the difference clone.tx - original.tx and clone.ty - original.ty; I scale the original polygon matrix NOT around the point; I recreate the body; I create the fixture; I set the position of the body to dx and dy; done! So what I am interested in is a formula for dx and dy without cloning matrices, scaling the clone around a point, getting dx and dy, and finally scaling the vertex matrix.
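    If I write the two scalings out, the clone step seems to collapse to a closed form (assuming a uniform scale s about the point and a matrix with no rotation mixed in): scaling about a point P sends a translation T to P + s*(T - P), while scaling about the origin sends it to s*T, so the difference is a pure translation. A sketch of both offsets:

        # Sketch only: uniform scale s about point (px, py), matrix translation (tx, ty), no rotation.
        def offset_vs_unscaled(s, px, py, tx, ty):
            # what the clone-and-compare dance computes today: clone.t - original.t
            return ((1 - s) * (px - tx), (1 - s) * (py - ty))

        def offset_vs_origin_scaled(s, px, py):
            # how far a scale "around the point" sits from a scale "around the origin",
            # independent of the original translation
            return ((1 - s) * px, (1 - s) * py)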

    Read the article

  • links for 2011-02-21

    - by Bob Rhubart
    Calling all enterprise architects | Enterprise architecture - InfoWorld Nominations are now open for the 2011 InfoWorld Enterprise Architecture Award, honoring companies whose enterprise architecture initiatives made a difference (tags: ping.fm) Red Tape, Part II : OTN Garage "How do you back up all of that storage? Tape: really fast tape. And, lots of it. This creates a whole variety of very interesting challenges today, elevating the topic to – at the very least – glamorous, but I think it qualifies as being downright hot!" - Kemer Thomson (tags: oracle entarch datastorage) The Buttso Blathers: Using Secure Config Files with the WebLogic Maven Plugin "WebLogic Server has long had a mechanism to provide a more secure way of connecting to the Administration Server from client utilities such that the username and password do not need to be specified and therefore can’t be seen from the process list or command shell history." (tags: oracle weblogic) World-class EA | Open Group Blog "World-class Enterprise Architecture is all about creating definitive collateral that defines how the architecture delivers value for societal value." - Mick Adams (tags: enterprisearchitecture entarch opengroup) Enterprise Process Maps: A Process Picture worth a Million Words (Telecommunications Architecture Corner) "Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should be begin with a clear understanding of the business environment..." - Raul Goycoolea (tags: oracle otn telecommunications businessprocess entarch bpm) Andrejus Baranovskis's Blog: WebCenter PS3 Customization Manager- Long Awaited Feature for MDS Oracle ACE Director Andrejus Baranovski shares "really great news for those of you who are working on MDS personalization and customization support in Oracle Fusion Middleware applications." (tags: oracle otn oracleace webcenter enterprise2.0) Oracle WebCenter: Common User Experience Architecture (Oracle Enterprise 2.0 Blog) Kellsey Ruppel describes "how the new release of Oracle WebCenter delivers a Common User Experience Architecture." (tags: oracle otn webcenter enterprise2.0) Java / Oracle SOA blog: Do your SOA deployments & configuration with AIA Oracle ACE Edwin Biemond illustrates the use of the SOA Suite / FMW deployment framework, "one of the Application Integration Architecture (AIA) hidden gems." (tags: oracle oracleace soa otn fusionmiddleware) Enterprise Software Development with Java: Clustering Stateful Session Beans with GlassFish 3.1 Oracle ACE Director Markus Eisele describes what he did "to get a Stateful Session Bean failover scenario working with two instances on one node." (tags: oracle otn oracleace glassfish) Enhanced REST Support in Oracle Service Bus 11gR1 (SOA Thinker) Jeff Davies illustrates how to re-implement the REST-ful Products services using query strings for passing parameter information. (tags: oracle otn soa REST)

    Read the article

  • Cheating on Technical Debt

    - by Tony Davis
    One bad practice guaranteed to cause dismay amongst your colleagues is passing on technical debt without full disclosure. There could only be two reasons for this. Either the developer or DBA didn’t know the difference between good and bad practices, or they concealed the debt. Neither reflects well on their professional competence. Technical debt, or code debt, is a convenient term to cover all the compromises between the ideal solution and the actual solution, reflecting the reality of the pressures of commercial coding. The one time you’re guaranteed to hear one developer, or DBA, pass judgment on another is when he or she inherits their project, and is surprised by the amount of technical debt left lying around in the form of inelegant architecture, incomplete tests, confusing interface design, no documentation, and so on. It is often expedient for a Project Manager to ignore the build-up of technical debt, the cut corners, not-quite-finished features and rushed designs that mean progress is satisfyingly rapid in the short term. It’s far less satisfying for the poor person who inherits the code. Nothing sends a colder chill down the spine than the dawning realization that you’ve inherited a system crippled with performance and functional issues that will take months of pain to fix before you can even begin to make progress on any of the planned new features. It’s often hard to justify this ‘debt paying’ time to the project owners and managers. It just looks as if you are making no progress, in marked contrast to your predecessor. There can be many good reasons for allowing technical debt to build up, at least in the short term. Often, rapid prototyping is essential, there is a temporary shortfall in test resources, or the domain knowledge is incomplete. It may be necessary to hit a specific deadline with a prototype, or proof-of-concept, to explore a possible market opportunity, with planned iterations and refactoring to follow later. However, it is a crime for a developer to build up technical debt without making this clear to the project participants. He or she needs to record it explicitly. A design compromise made in order to hit a deadline, be it an outright hack, or a decision made without time for rigorous investigation and testing, needs to be documented with the same rigor that one tracks a bug. What’s the best way to do this? Ideally, we’d have some kind of objective assessment of the level of technical debt in a software project, although that smacks of Science Fiction even as I write it. I’d be interested to hear of any methods you’ve used, but I’m sure most teams have to rely simply on the integrity of their colleagues and the clear perceptions of the project manager… Cheers, Tony.
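    Short of an objective measure, even a minimal, structured "debt register" entry, tracked with the same tooling as a bug, is better than nothing. A sketch of the fields such an entry might carry (names are illustrative, not a prescription):

        # Illustrative sketch of a technical-debt register entry, tracked like a bug.
        from dataclasses import dataclass

        @dataclass
        class DebtEntry:
            summary: str         # the compromise that was made
            reason: str          # why it was expedient (deadline, missing test resources, ...)
            impact: str          # what it will cost later if left unpaid
            owner: str           # who agreed to it
            planned_payoff: str  # the iteration or milestone where it should be repaid

        entry = DebtEntry(
            summary="Skipped integration tests for the import path",
            reason="Proof-of-concept deadline",
            impact="Import regressions will only surface in production",
            owner="jsmith",
            planned_payoff="Iteration 14",
        )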

    Read the article

  • Unexpected results for projection on to plane

    - by ravenspoint
    I want to use this projection matrix: GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; It should cast object shadows onto the y = 0 plane from a point light at 1,1,-1. I create a rectangle in the x = 0.5 plane glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); Now if I manually multiply these vertices with the matrix, I get: glBegin( GL_QUADS ); glVertex3f( 0.375,0,-0.375); glVertex3f( 0.375,0,-1.625); glVertex3f( 0,0,-2); glVertex3f( 0,0,0); glEnd(); Which produces a reasonable display ( camera at 0,5,0 looking down y axis ) So rather than do the calculation manually, I should be able to use the OpenGL model transformation. I write this code: glMatrixMode (GL_MODELVIEW); GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; glLoadMatrixf( shadow ); glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); But this produces a blank screen! What am I doing wrong? Is there some debug mode where I can print out the transformed vertices, so I can see where they are ending up? Note: People have suggested that using glMultMatrixf() might make a difference. It doesn't. Replacing glLoadMatrixf( shadow ); with glLoadIdentity(); glMultMatrixf( shadow ); gives the identical result ( of course! )
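    One way to see where the transformed vertices end up is to redo the multiplication in a few lines of script; this is just a sketch of the arithmetic (treating the array as column-major, the way glLoadMatrixf does), not OpenGL code:

        # Reproduce the hand calculation: transform each quad vertex by the
        # column-major shadow matrix, then divide by w.
        shadow = [-1,0,0,0,  1,0,-1,1,  0,0,-1,0,  0,0,0,-1]   # column-major, as passed to glLoadMatrixf
        cols = [shadow[i:i+4] for i in range(0, 16, 4)]

        def transform(x, y, z, w=1.0):
            out = [x*cols[0][i] + y*cols[1][i] + z*cols[2][i] + w*cols[3][i] for i in range(4)]
            return [c / out[3] for c in out[:3]], out[3]       # divided result, plus the raw w

        for v in [(0.5,0.2,-0.5), (0.5,0.2,-1.5), (0.5,0.5,-1.5), (0.5,0.5,-0.5)]:
            print(transform(*v))
        # prints the same four points as the manual version (0.375,0,-0.375) ... (0,0,0),
        # but the raw w comes out negative (-0.8 or -0.5) for every vertex, which may be
        # why the pipeline clips the quad even though the divided coordinates look right.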

    Read the article

  • Turn Chrome’s New Tab Page into an Ubuntu Forums Powerhouse

    - by Asian Angel
    There’s a lot of start page plug-ins for Google Chrome, but if you’re an Ubuntu Forums enthusiast, you might be interested in the powerful UbuntuForums.org Start Page extension. Using the Ubuntu Forums Chrome Start Page Once you’ve installed the extension and opened a new tab, you’ll notice quite a difference from the boring default “New Tab Page”. While you may look at this and wonder if what you see is all that there is to it, there’s actually a lot of hidden customization and functionality. The upper left corner is where you will control the content displayed in the “Ubuntu New Tab Page”. The “Buttons Toolbar” that you see at the top lets you shift between the types of content viewed, from RSS feeds to bookmarks and more. This entire set of links provides direct links to the appropriate section in the Ubuntu Forums. As with the set of links pictured above each of these will open in a new tab when clicked on. There’s even a feature to browse wallpapers from DesktopNexus with categories on the left and pictures to the right. If you want to return to the regular layout you should use the “Start Page Link” highlighted here. Options The options will allow you to customize the colors shown in the “Ubuntu New Tab Page” to create a very nice match to your current browser and/or system theme if desired. You can also do some customization to the fonts, “Bookmark Shortcuts”, and populate the “Custom Links Section” in the main page. Conclusion If you are an Ubuntu enthusiast then this will be a very useful extension to add to your browser. The wealth of direct links and built-in functionality make this extension worth trying. Links Download the Ubuntuforums.org Start Page extension (Google Chrome Extensions)

    Read the article

  • CAM v2.0 ships – all new foundation version

    - by drrwebber
    The latest release of the CAM editor toolset is now available on Sourceforge.net – search NIEM. In this all new version the support from Oracle has enabled a transformation of the editor underpinning Java framework and results in 3x performance improvement and 50% better memory utilization. The result of nearly six months of improvements are catalogued in the release notes. http://sourceforge.net/projects/camprocessor/files/CAM%20Editor/Releases/2.0/CAM_Editor_2-0_Release_Notes.pdf/download However here I’d like to talk about the strategic vision and highlight specific new go to features that make a difference for exchange schema designers and with a focus on the NIEM community. So why is this a foundation version? Basically the new drag and drop designer tool allows you to tailor your own dictionary collection of components and then simply select and position those into your resulting exchange structure. This is true global reuse enabled from a canonical domain dictionary collection. So instead of grappling with XSD Schema syntax, or UML model nuances – this is straightforward direct WYSIWYG visual engineering – using familiar sets of business components. Then the toolkit writes the complex XSD Schema for you, along with test samples, documentation, XMI/UML models, Mindmaps and more. So how do you get a set of business components? The toolkit allows you to harvest these from existing schema collections or enterprise data models, or as in the case of NIEM, existing domain dictionary collections. I’ve been using this for the latest IEEE/OASIS/NIST initiative on a Common Data Format (CDF) for elections management systems. So you can download those from OASIS and see how this can transform how you build actual business exchanges – improving the quality, consistency and usability – and dramatically allowing automated generation of artifacts you only dreamed of before – such as a model of your entire major exchange collection components. http://www.oasis-open.org/committees/documents.php?wg_abbrev=election So what we have here is a foundation version – setting the scene and the basis for changing how people can generate and manage information exchanges. A foundation built using the OASIS CAM standard combined with aspects of the NIEM Naming and Design Rules and the UN/CEFACT Core Components specifications and emerging work on OASIS CIQ name and address and ANSI/ISO code list schema. We still have a raft of work to do to integrate this into SOA best practices and extend the dictionary capabilities to assist true community development. Answering questions such as: - How good is my canonical component collection? - How much reuse is really occurring? - What inconsistencies and extensions are there in the dictionary components? Expect us to begin tackling these areas now that the foundation is in place. The immediate need is to develop training and self-start materials – so we will be focusing there for the next couple of months and especially leading up to the IJIS industry event in July in New Jersey, and the NIEM NTE event in August in Philadelphia. http://sourceforge.net/projects/camprocessor

    Read the article

  • Misaligned Display on Resume

    - by Shaun Killingbeck
    I have an odd issue with my laptop display when resuming from suspend. When I have an additional monitor connected there is no issue. However without an additional monitor connected, after resuming only the left 10% of the laptop screen (just enough to show the Unity Launcher and a bit more) is visibly working, although strangely in a screenshot this same 10% is shown on the right hand side of the screenshot: I ran xrandr --verbose before and after resume, and the only difference (using diff) was: 2c2 < LVDS connected 1366x768+0+0 (0x98) normal (normal left inverted right x axis y axis) 344mm x 194mm --- > LVDS connected 1366x768+1280+0 (0x98) normal (normal left inverted right x axis y axis) 344mm x 194mm This seems to suggest the screen position has been shifted by 1280 horizontally, the width of the second monitor I use. Indeed, running the command xrandr --output LVDS --pos 0x0 does bring the screen back to normal. However, I don't want to have to run this command every time, I'd prefer to cure the source of the problem than just correct the symptoms. Any ideas on how to get Ubuntu to keep the display configuration settings from before suspend when it resumes? or why it changes at all? Heres some technical details that might be pertinent: HP Pavilion DV6 Laptop Ubuntu 13.04 AMD Radeon HD 6400M Series AMD Radeon HD 6520G Using proprietary flgrx-updates driver and amdcccle (Catalyst Control Center) (Unfortunately the open source driver causes my laptop to run even hotter than it already does, otherwise I'd use that) The contents of Xorg.conf: Section "ServerLayout" Identifier "amdcccle Layout" Screen 0 "amdcccle-Screen[0]-0" 0 0 EndSection Section "Module" Load "glx" EndSection Section "Monitor" Identifier "0-LVDS" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" Option "PreferredMode" "1280x768" Option "TargetRefresh" "60" Option "Position" "0 0" Option "Rotate" "normal" Option "Disable" "false" EndSection Section "Monitor" Identifier "0-CRT1" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" Option "PreferredMode" "1280x768" Option "TargetRefresh" "60" Option "Position" "0 0" Option "Rotate" "normal" Option "Disable" "false" EndSection Section "Monitor" Identifier "1-LVDS" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" Option "TargetRefresh" "60" Option "Position" "1280 0" Option "Rotate" "normal" Option "Disable" "false" Option "PreferredMode" "1366x768" EndSection Section "Monitor" Identifier "1-CRT1" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" Option "TargetRefresh" "60" Option "Position" "0 0" Option "Rotate" "normal" Option "Disable" "false" Option "PreferredMode" "1280x1024" EndSection Section "Device" Identifier "amdcccle-Device[0]-0" Driver "fglrx" Option "Monitor-LVDS" "1-LVDS" Option "Monitor-CRT1" "1-CRT1" BusID "PCI:0:1:0" EndSection Section "Device" Identifier "amdcccle-Device[0]-1" Driver "fglrx" Option "Monitor-LVDS" "1-LVDS" BusID "PCI:0:1:0" Screen 1 EndSection Section "Screen" Identifier "Default Screen" DefaultDepth 24 EndSection Section "Screen" Identifier "amdcccle-Screen[0]-0" Device "amdcccle-Device[0]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Virtual 2646 2646 Depth 24 EndSubSection EndSection Section "Screen" Identifier "amdcccle-Screen[0]-1" Device "amdcccle-Device[0]-1" 
DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection
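    Until the root cause is found, the workaround itself can at least be automated. A sketch of a resume hook that re-applies the position; where it hooks in depends on the power-management stack (for example a script under /etc/pm/sleep.d on pm-utils setups), and it assumes DISPLAY/XAUTHORITY are set so xrandr can reach the session:

        #!/usr/bin/env python
        # Sketch: re-apply the panel position after resume. Assumes the
        # power-management stack invokes this with "resume" or "thaw" as its argument.
        import subprocess
        import sys

        if len(sys.argv) > 1 and sys.argv[1] in ("resume", "thaw"):
            subprocess.call(["xrandr", "--output", "LVDS", "--pos", "0x0"])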

    Read the article

  • Thinkpad brightness steps error using FN+Home/End

    - by petermolnar
    I've run into the following problem: normally my T400 (Lenovo Thinkpad) has 16 steps of brightness, and Windows utilizes it correctly. After a fresh install & minor tweaks of Mint 12 (which is based on Ubuntu 11.10) I only had 6 steps, which was way too few. Listing /sys/class/backlight showed 3 entries. I removed the acpi-tools package, one of them disappeared - and I now have 10 steps! Therefore I think if I can reduce the entries to 1 I'm going to have 16 steps, since the stepping will be 1 instead of 2 (or 3). /sys/class/backlight/ intel_backlight -> ../../devices/pci0000:00/0000:00:02.0/drm/card0/card0-LVDS-1/intel_backlight thinkpad_screen -> ../../devices/virtual/backlight/thinkpad_screen The problem is that I'm unable to trace back which configs / daemons / kernel options trigger these two. More strangely, I discovered an odd behaviour. I monitored watch -n1 "cat /sys/class/backlight/thinkpad_screen/actual_brightness" and watch -n1 "cat /sys/class/backlight/intel_backlight/actual_brightness" while changing the brightness with FN+Home/End combinations from max to min. The outcome is the following: brightness intel thinkpad --------- ----- -------- MAX 2408475 7 | 1955115 5 | 1435640 3 | 1246740 1 | 1086175 0 | 1010615 6 | 859495 4 | 689485 2 v 481695 0 MIN 217235 0 brightness intel thinkpad --------- ----- -------- MIN 217235 0 | 481695 2 | 689485 4 | 859495 6 | 1010615 7 | 1086175 1 | 1246740 3 | 1435640 5 v 1955115 7 MAX 2408475 0 When stepping from MIN to MAX, there's no difference between the last 2 steps. Also, the OSD icon (Cinnamon desktop, default theme) goes from full to min in 4 steps and from full to min once again in 4 steps. So... it seems that the intel entry is working correctly, showing correct values. The thinkpad entry however twists things and even shows incorrect values. Does anyone have any idea how to get rid of the thinkpad entry? System data: Linux Mint 12 3.0.0-16 kernel Lenovo ThinkPad T400 Cinnamon 1.4 desktop For any additional info, please tell me what you need. EDIT I'm sorry, I forgot to mention, I added acpi_backlight=vendor to the GRUB cmdline as well; that is what produces the somewhat better behaviour compared to the default.
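    The same monitoring can be scripted, which makes it easier to log both interfaces side by side while stepping FN+Home/End (paths taken from the listing above):

        # Log both backlight interfaces once a second while stepping FN+Home/End.
        import time

        PATHS = [
            "/sys/class/backlight/intel_backlight/actual_brightness",
            "/sys/class/backlight/thinkpad_screen/actual_brightness",
        ]

        while True:
            values = []
            for p in PATHS:
                with open(p) as f:
                    values.append(f.read().strip())
            print("\t".join(values))
            time.sleep(1)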

    Read the article

  • Make Browsing Safer for Children in Google Chrome

    - by Asian Angel
    If you are worried about the websites that your children could accidentally visit while browsing, then you may want to have a look at the Kid Safe – LinkExtend extension for Google Chrome. Kid Safe – LinkExtend in Action Before going any further you may want to have a quick look at the options. Everything is enabled by default but it is recommended that you disable the “Allow entering unsafe sites Option”. For our first example we visited “chatroulette.com”. As you can see in the screenshot WOT and McAfee SiteAdvisor gave the website a “green rating” but when it came specifically to its level of appropriateness for children LinkExtend gave it a “yellow rating”. Our second example was “hotbabes.com”…obviously not a good website for any child to visit. You can see that the entire window area has been totally “blacked out” and the available information for this site from each of the six ratings sources. The “Toolbar Button” is also displaying a “red rating”… Notice the two links at the bottom of the ratings screen…both will be visible if the “Allow entering unsafe sites Option” is not disabled (see Options above). You can see the difference for the links at the bottom of the ratings screen if you have the “Allow entering unsafe sites Option” disabled. Definitely much much better… Clicking on the “Find Kids Sites Link” will navigate the tab to the Yahoo! Kids website. The extension will also place “ratings buttons” beside search results at Google. As you can see in the screenshot below not all of the results had information available for them at this time. But it is certainly a lot better than nothing at all when it comes to keeping your children safe. A close-up look at the ratings for one of the search results. Conclusion While no browser add-in makes for a perfect solution the Kid Safe – LinkExtend extension will definitely be a helpful addition to your family’s Chrome browser. Links Download the Kid Safe – LinkExtend extension (Google Chrome Extensions)

    Read the article

  • Interconnect nodes in a Java distributed infrastructure for tweet processing

    - by David Moreno García
    I'm working on a new version of an old project that I used to download and process user statuses from Twitter. The main problem with that project was its infrastructure. I used multiple instances of a Java application (trackers) to download from Twitter according to a specific task (basically terms to search for), connected to a central node (a web application) that had to process all tweets once per day and generate a new task for each tracker every 15 minutes. The central node also had to monitor all trackers and enable/disable them on user request. This, as I said, was too slow because I had multiple bottlenecks, so in this new version I want to improve the infrastructure and isolate each functionality in a specific node. I also need a good notification system to receive notifications from any node.

    So, in the next diagram I show the components that I'll need in this new version. As you can see, there are more nodes. Here are some notes about them:

        Dashboard: Controls tracker statuses and sends a single task to each of them (on user request). The trackers will use this task until it is replaced with a new one (only when needed, not every 15 minutes like before).
        Search engine: I need to store all the tweets. They are first stored in a local database for each tracker, but after that I'm thinking of using something like Elasticsearch to be able to do fast searches.
        Tweet processor: Just an isolated component with its own database (maybe something like the search engine, to have fast access to the info generated by the module). In the future more could be added.
        Application UI: A web application with a database shared with the Dashboard (mainly to store user information and preferences). Indeed, both could be merged into a single web application. The main difference from the previous version of the project is that now these components will be isolated and will only show information and send requests; I will not do any heavy work in them (like processing tweets, as I did before).

    So, having these components, my main headache is how to structure everything so that I don't have to rewrite a lot of code every time I need to access any new data. Another headache is how to interconnect the nodes. I could use sockets, but that is a pain in the ass. Maybe a REST layer? And finally, if all the nodes are isolated, how could I generate notifications for each user, whose info lives only in the database used by the Application UI?

    I'm programming this using Java and Spring (at least I used them in the last version), but I have no problem with changing the language if I can take advantage of a tool/library/engine that makes my life easier and gives me a better platform. Any comment will be appreciated.
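    To make the REST-layer idea concrete, this is roughly the smallest interconnect I have in mind, sketched with only the JDK (Java 8+) so it stays self-contained: the Dashboard publishes each tracker's current task over HTTP and a tracker pulls it. The port, the URL layout and the JSON payload are made up purely for illustration; in practice this would more likely be a Spring MVC controller plus RestTemplate, or a message broker if push-style notifications turn out to be necessary.

        import com.sun.net.httpserver.HttpExchange;
        import com.sun.net.httpserver.HttpServer;

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.InetSocketAddress;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        public class RestInterconnectSketch {

            // Dashboard side: publish the current task for a tracker over HTTP.
            static void startDashboard() throws Exception {
                HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
                server.createContext("/trackers/42/task", (HttpExchange exchange) -> {
                    // Hypothetical payload; in the real system this would be read from the shared database.
                    byte[] body = "{\"terms\":[\"madrid\",\"futbol\"]}".getBytes(StandardCharsets.UTF_8);
                    exchange.sendResponseHeaders(200, body.length);
                    try (OutputStream os = exchange.getResponseBody()) {
                        os.write(body);
                    }
                });
                server.start();
            }

            // Tracker side: pull the task from the Dashboard when needed.
            static String fetchTask() throws Exception {
                URL url = new URL("http://localhost:8080/trackers/42/task");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                    StringBuilder sb = new StringBuilder();
                    String line;
                    while ((line = in.readLine()) != null) sb.append(line);
                    return sb.toString();
                }
            }

            public static void main(String[] args) throws Exception {
                startDashboard();
                System.out.println("Tracker 42 received task: " + fetchTask());
            }
        }

    The same pattern could work in the other direction for notifications: each node exposes a small endpoint and whichever node produces an event POSTs to it, so no node ever needs direct access to another node's database.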

    Read the article

  • Remove Clutter from the Opera Speed Dial Page

    - by Asian Angel
    Do you want to clean up the Speed Dial page in Opera so that only the thumbnails are visible? Today we show you a couple of tweaks that will make it happen.

    Speed Dial Page

    The search bar and text at the bottom take up room and add clutter to the look and feel of Opera’s Speed Dial page.

    Changing the Settings

    Two small tweaks to the config settings will clean it all up. To get started, type opera:config into the address bar and press Enter. Type “speed” into the quick find bar and look for the Speed Dial State entry. Change the 1 to 2 and click Save. You will see the following message concerning the changes…click OK. Next, type “search” into the quick find bar and look for the Speed Dial Search Type entry. Remove all of the text in the blank and click Save. Once again you will see a message about the latest change that you have made. At this point you may need to restart Opera for both changes to take full effect.

    There will be a noticeable difference in how the Speed Dial page looks afterwards; it is much cleaner without the search bar and text field. You will also still be able to access the right-click context menu just like before.

    Conclusion

    If you have been looking to get a cleaner and less cluttered Speed Dial page in Opera, then these two little hacks will get the job done!

    Read the article

  • Server-infrastructure recommendations

    - by Tim van Elsloo
    Here's the thing: I need a cheap, fast, reliable infrastructure that can dynamically scale (like Amazon S3: cloud storage). I'm thinking of 3 different types of 'servers':

        Application server
            Should be able to run CentOS (or another lightweight Linux distro).
            Should be able to run Apache.
            Should be able to run PHP.
            Should be able to run GD (so it does rely on its CPU).
            Should be extremely reliable and fast.
        Database server
            Should be able to run MySQL.
            Should be able to... well, do nothing else :P.
            Should be extremely reliable and fast.
        Storage server
            Should be able to run some kind of file-transfer daemon (like FTP, CouchDB, etc.).
            Should be able to do nothing else.
            Should be extremely reliable and fast.

    So technically, by transferring all static data to 2 different servers/services, the application server can totally focus on the web pages.

    My questions:
        What services do you recommend?
        Which is cheaper, faster and more reliable: using my own server, or using some cloud-storage/cloud-computing service (like Amazon S3, CloudFiles, etc.)?
        How can I prevent bandwidth abuse (such as DoS attacks causing the bill to be extremely high)?
        What's the difference between "including CDN" and "excluding CDN"? It seems the price doesn't differ at CloudFiles. Do you have to pay "including CDN" + "excluding CDN" when you decide to enable the delivery network, or do you only have to pay "including CDN"?
        Should I use my own nameserver too, or can I use my domain hoster's nameservers? What are the minimum software specifications of a nameserver? Can I write some software myself? Does anyone have a good protocol description?

    I hope you can answer my questions.

    Answers
        I shouldn't write my own nameserver software. Instead, I should use something like BIND (http://osspro.com/2010/05/04/linux-create-your-own-domain-name-server-dns/).

    Read the article

  • Unable to boot ubuntu 11.10 from external usb drive

    - by user45006
    I'm new to Ubuntu (and actually all things Linux) as of this morning, so please excuse any stupid mistakes I may be making. I recently bought an external hard drive for my newly built PC (which is running Windows 7, if it matters). I would like to install Ubuntu onto the external drive and boot from there. I downloaded Ubuntu 11.10 and made a bootable CD, unplugged my internal HDDs, plugged in the external drive, installed Ubuntu 11.10 on the external drive via the installer, and replugged my internal HDDs. Then I set my BIOS boot order to: Boot from USB-HDD - Boot from Hard Disk - Boot from CD/DVD. Now when I restart I get the message "Starting Operating System..." (or something like that, I forget exactly what it says), which lingers on the screen for a moment, and then Windows starts. Any idea what the problem may be?

    ~Relevant info~
        BIOS version: Award Software International, Inc. F2, 2/22/2011
        Ubuntu version: 11.10
        External hard drive: Western Digital My Passport Essential 500GB Portable Hard Drive (Black)

    ~Things I've already tried~
        1) Unplugged the internal HDDs so that only the external drive was connected via USB. The same thing happened, only obviously my BIOS could not detect any hard drives besides the external one. When booting I received the error "Could not detect operating system".
        2) Formatted the external hard drive and reinstalled. It didn't make a difference; however, interestingly, when I booted from CD the Ubuntu installer said it detected Ubuntu 11.10 on the external hard drive.
        3) Within the BIOS I've messed around with every boot order combo I could think of, both in the "Hard Disk boot order" screen and the "Boot order" screen. I'm a little confused about why there are two screens for this.
        4) Held F12 during startup, which opens (what I think is) the one-time boot menu, and it gave me the options "Hard Drive", "cd/dvd", "USB-FDD", "USB-cdrom", "USB-HDD", and "USB-something else I can't remember what it was". I tried all of them, but the same thing as before happened each time.

    ~References~
        I noticed several people on Ask Ubuntu have tried to do something similar, if not the exact same thing. In fact, I even found a post that pretty much outlines step by step exactly what I did... only theirs worked. /jealous. Linky: Install Ubuntu or Kubuntu on a External USB Drive

    I'm willing to try a different version of Ubuntu - it's not like my heart is set on 11.10 - but it's a pain to open my case and unplug my internal hard drives, so I'd prefer not to do this unless someone is reasonably confident it'll work. Thank you for all of your help in advance! I'm really looking forward to exploring Ubuntu!

    Read the article

< Previous Page | 281 282 283 284 285 286 287 288 289 290 291 292  | Next Page >