Search Results

Search found 25123 results on 1005 pages for 'domain model'.


  • I can't install Ubuntu 12.04.1 on iMac G5

    - by user89004
    So, I have this iMac G5 that doesn't have iSight, only a small light sensor underneath I think, machine model 8.2. I tried burning an Ubuntu 12.04.1 PowerPC 64-bit .iso to a CD but the computer just won't boot it, I don't know why. Next I tried with a USB stick but it wouldn't boot that either. I created the USB stick on my dad's Win7 laptop, as the process was way easier than on Mac or Ubuntu (no command typing at all on Windows). I'm able to get into Open Firmware and type boot usb, and it does show some text that scrolls so fast I can't read anything, and then it just gives me a huge grey "no" sign, like a stop sign with the line in the middle tilted towards the left, and freezes. Another issue I'm having is with hdiutil: I can't convert the .iso I just downloaded into a .img because the file keeps disappearing right when the conversion is done. I used the syntax from the Ubuntu support page on how to create a bootable USB drive under Mac OS X. I even left out the two ~ characters shown in the syntax, and I also tried running the whole thing as root with sudo su before the command. The funny thing is that if I convert something smaller it works. The command I was using is hdiutil convert -format UDRW -o /path/to/target.img /path/to/ubuntu.iso. I even tried hdiutil convert /path/to/ubuntu.iso -format UDRW -o /path/to/target.img but the same thing happens: the .img.dmg file disappears when the conversion is done, no matter where I set the output file to go. I have tried several different folders; the same thing happens with all of them. I also tried burning an Ubuntu mini ISO to a CD (can't remember if it was 11.10 or 12.10), and even though holding C when the iMac boots up does show me the CD and I can boot from it, I get a weird error upon hitting install, something like "invalid memory access, release keys" plus error strings I can't read. I don't have any original DVDs for this iMac and can't run hardware diagnostics. Whatever option I try at the command prompt of the mini Ubuntu CD I get the same result: an error code and a frozen Open Firmware backdrop. I noticed that the pen drive I created on my dad's Win7 laptop is formatted as MS-DOS, but I can still mount it no problem, so it shouldn't have a problem booting, right? I used the advice on ubuntu.com to make it, from here. Also, my partition is HFS+ so I can't use it as a hard drive and boot from it. I don't have 2 partitions either, just one HDD, one partition. Please help!
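
    For reference, here is a sketch of the usual Mac OS X sequence from Ubuntu's instructions (paths and device numbers are placeholders; double-check with diskutil list before writing anything):

        hdiutil convert -format UDRW -o ~/ubuntu.img ~/Downloads/ubuntu.iso
        # note: hdiutil appends .dmg, so the output is actually ~/ubuntu.img.dmg
        diskutil list                       # identify the USB stick, e.g. /dev/disk2
        diskutil unmountDisk /dev/disk2
        sudo dd if=~/ubuntu.img.dmg of=/dev/rdisk2 bs=1m
        diskutil eject /dev/disk2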

    Read the article

  • Programming Language, Turing Completeness and Turing Machine

    - by Amumu
    A programming language is said to be Turing complete if it can successfully simulate a universal TM. Let's take a functional programming language as an example. In functional programming, functions have the highest priority over everything: you can pass functions around like any primitives or objects. This is called first-class functions. In functional programming, a function does not produce side effects, i.e. it does not output strings onto the screen or change the state of variables outside of its scope. Each function has a copy of its own objects if the objects are passed from the outside, and the copied objects are returned once the function finishes its job. Each function written purely in functional style is completely independent of anything outside of it. Thus, the complexity of the overall system is reduced. This is referred to as referential transparency. In functional programming, a function can keep the values of its local variables even after the function exits, and the values can be reused the next time the function is called. This is called memoization. A function usually should solve only one thing: it should model only one algorithm to answer a problem. Do you think that a function in a functional language with the above properties simulates a Turing Machine? Functions (= algorithms = Turing Machines) are able to be passed around as input and returned as output; a TM also accepts and simulates other TMs. Memoization models the set of states of a Turing Machine: the memoized variables can be used to determine the state of a TM (i.e. which lines to execute, what behavior it should take in a given state, ...). Also, you can use memoization to simulate your internal tape storage. In a language like C/C++, when a function exits, you lose all of its internal data (unless you store it elsewhere outside of its scope). The set of symbols is the set of all strings in a programming language, which is the higher-level and human-readable version of machine code (opcode). The start state is the beginning of the function; however, with memoization, the start state can be determined by memoization or, if you want, a switch/if-else statement in an imperative programming language. But then, you can't. The final accepting state is when the function returns a value, or rejects if an exception happens. Thus, the function (= algorithm = TM) is decidable; otherwise, it's undecidable. I'm not sure about this. What do you think? Is my thinking true on all of this? The reason I bring up functions in functional programming is because I think they're closer to the idea of a TM. What experience with other programming languages do you have which makes you feel the idea of TM and the ideas of Computer Science in general? Can you specify how you think?
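
    To make the memoization point concrete, here is a minimal sketch in Python (not from the original post) of a function whose local state survives between calls:

        def make_fib():
            cache = {0: 0, 1: 1}          # local state that outlives each call
            def fib(n):
                if n not in cache:
                    cache[n] = fib(n - 1) + fib(n - 2)
                return cache[n]
            return fib

        fib = make_fib()
        print(fib(40))                    # 102334155; earlier results are reused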

    Read the article

  • Oracle went back to school!

    - by Cristina Ciocoiu
    I am Georgiana, Contracts Manager for Oracle University and Advanced Customer Services in Romania. I started working for Oracle 4 years ago as a Contracts Specialist. Two years ago I became manager of a team of 9 Contracts Specialists. On a sunny day in March, some members of my team visited the students of the Academy of Economic Studies, accompanied by Recruitment colleagues. This was part of a new initiative to raise awareness of career opportunities at Oracle. We spent approximately 2 hours illustrating and explaining different aspects of the day-to-day activities of an Oracle Contracts Specialist to the future graduates of the Academy. Role Play Since a role play is worth 1000 job descriptions, the audience witnessed an entertaining performance on the contracting process, from the phase of negotiation with the customer to the actual signing of the contract. The main focus was on the role of the Contracts Specialist liaising with all the groups involved and ensuring that the contract is compliant with Oracle policies while generating the expected revenue. However, the team took on other roles as well, namely Sales Representative, Customer, Business Approver and Lawyer, to demonstrate their part in the process. As each of these roles only has a small slice of the big pie, it is vital to understand what happens before and after you come on stage as a Contracts Specialist. Contracts Specialist Being a Contracts Specialist goes beyond simply knowing what policies apply; it means understanding Oracle's core business model, understanding customers' requests and addressing them in the most effective way. The job also involves connecting smaller teams that are often geographically dispersed across multiple regions so that they become a bigger, stronger and more successful team. You are the expert in this key position who can facilitate the closing of a deal or stop it from happening if the risk is too high. The role play provided insights on both. Why I love this job Events of this kind are sometimes just as useful for the "recruiters" as for the "recruits". For me, as a presenter, it was an excellent opportunity to think about the many reasons why I love what I do in the Contracts department every day and to share this with the students. I wanted to explain to the audience, who are still considering education and career possibilities, that what we do in Contracts DOES make a difference. You have the power to achieve targets that you did not think reachable before. Working in the dynamic Oracle environment shapes you as a person, and there is a lot to take away from this experience. Looking back on my years in the Academy (I graduated from the Academy myself), I wish I could have listened to more people talking about their great jobs and about how I could get there. If those were Oracle people, I might have been writing this article sooner. :) If you are interested in joining the Contracts team please click here for more information or contact lavinia.protopopescu-AT-oracle-DOT-com. You can find all openings in Romania via http://campus.oracle.com

    Read the article

  • Two interfaces with identical signatures

    - by corsiKa
    I am attempting to model a card game where cards have two important sets of features: The first is an effect. These are the changes to the game state that happen when you play the card. The interface for effect is as follows: boolean isPlayable(Player p, GameState gs); void play(Player p, GameState gs); And you could consider the card to be playable if and only if you can meet its cost and all its effects are playable. Like so: // in Card class boolean isPlayable(Player p, GameState gs) { if(p.resource < this.cost) return false; for(Effect e : this.effects) { if(!e.isPlayable(p,gs)) return false; } return true; } Okay, so far, pretty simple. The other set of features on the card are abilities. These abilities are changes to the game state that you can activate at will. When coming up with the interface for these, I realized they needed a method for determining whether they can be activated or not, and a method for implementing the activation. It ends up being boolean isActivatable(Player p, GameState gs); void activate(Player p, GameState gs); And I realize that with the exception of calling it "activate" instead of "play", Ability and Effect have the exact same signature. Is it a bad thing to have multiple interfaces with an identical signature? Should I simply use one, and have two sets of the same interface? Like so: Set<Effect> effects; Set<Effect> abilities; If so, what refactoring steps should I take down the road if they become non-identical (as more features are released), particularly if they're divergent (i.e. they both gain something the other shouldn't, as opposed to only one gaining and the other being a complete subset)? I'm particularly concerned that combining them will be unsustainable as soon as something changes. The fine print: I recognize this question is spawned by game development, but I feel it's the sort of problem that could just as easily creep up in non-game development, particularly when trying to accommodate the business models of multiple clients in one application, as happens with just about every project I've ever done with more than one business influence... Also, the snippets used are Java snippets, but this could just as easily apply to a multitude of object-oriented languages.
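
    For what it's worth, the single-interface option above could look like this (a sketch; GameAction is an invented name, Player and GameState as in the snippets):

        import java.util.Set;

        // One shared interface covering both effects and abilities; whether
        // merging them like this is wise is exactly the question being asked.
        interface GameAction {
            boolean isPlayable(Player p, GameState gs);
            void play(Player p, GameState gs);
        }

        class Card {
            int cost;
            Set<GameAction> effects;    // applied when the card is played
            Set<GameAction> abilities;  // activated at will afterwards

            boolean isPlayable(Player p, GameState gs) {
                if (p.resource < this.cost) return false;
                for (GameAction e : this.effects) {
                    if (!e.isPlayable(p, gs)) return false;
                }
                return true;
            }
        }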

    Read the article

  • Data Auditor by Example

    - by Jinjin.Wang
    OWB has a node Data Auditors under Oracle Module in the Projects Navigator. What is a data auditor and how do you use it? I will give an introduction to data auditors and show their usage by example. A data auditor is an important tool in ensuring that data quality levels meet business requirements. A data auditor validates data against a set of data rules to determine which records comply and which do not. It gathers statistical metrics on how well the data in a system complies with a rule by auditing and marking how many errors occur against the audited table. Data auditors are typically scheduled for regular execution as part of a process flow, to monitor the quality of the data in an operational environment such as a data warehouse or ERP system, either immediately after updates like data loads, or at regular intervals. How do you use a data auditor to monitor data quality? Only objects with data rules can be monitored, so the first step is to define data rules according to business requirements and apply them to the objects you want to monitor. The objects can be tables, views, materialized views, and external tables. Secondly, create a data auditor containing the objects. You can optionally configure the data auditor and set physical deployment parameters for it, which will be used while running the data auditor. Then deploy and run the data auditor either manually or as part of a process flow. After execution, the data auditor sets several output values, and records that are identified as not complying with the defined data rules contained in the data auditor are written to error tables. Here is an example. We have two tables DEPARTMENTS and EMPLOYEES (see pic-1 and pic-2; click here for DDL and data) imported into OWB. We want to gather statistical metrics on how well data in these two tables satisfies the following requirements: a. Values of the EMPLOYEES.EMPLOYEE_ID attribute are three-digit numbers. b. Valid values for EMPLOYEES.JOB_ID are IT_PROG, SA_REP, SH_CLERK, PU_CLERK, and ST_CLERK. c. EMPLOYEES.EMPLOYEE_ID is related to DEPARTMENTS.MANAGER_ID. [Pic-1: EMPLOYEES; Pic-2: DEPARTMENTS] 1. To determine legal data within EMPLOYEES, or legal relationships between data in different columns of the two tables, we first define data rules based on the three requirements and apply them to the tables. a. The first requirement is about patterns that an attribute is allowed to conform to. We create a Domain Pattern List data rule EMPLOYEE_PATTERN_RULE here. The pattern is defined in the Oracle Database regular expression syntax as ^([0-9]{3})$. Then apply the data rule EMPLOYEE_PATTERN_RULE to the table EMPLOYEES.
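
    Outside OWB, the first two checks can be sketched directly in Oracle SQL (an illustration only, not the code OWB generates):

        -- rows violating the three-digit EMPLOYEE_ID pattern rule
        SELECT e.employee_id
        FROM   employees e
        WHERE  NOT REGEXP_LIKE(e.employee_id, '^[0-9]{3}$');

        -- rows whose JOB_ID falls outside the allowed value list
        SELECT e.employee_id, e.job_id
        FROM   employees e
        WHERE  e.job_id NOT IN ('IT_PROG','SA_REP','SH_CLERK','PU_CLERK','ST_CLERK');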

    Read the article

  • Unexpected results for projection onto a plane

    - by ravenspoint
    I want to use this projection matrix: GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; It should cast object shadows onto the y = 0 plane from a point light at 1,1,-1. I create a rectangle in the x = 0.5 plane glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); Now if I manually multiply these vertices with the matrix, I get: glBegin( GL_QUADS ); glVertex3f( 0.375,0,-0.375); glVertex3f( 0.375,0,-1.625); glVertex3f( 0,0,-2); glVertex3f( 0,0,0); glEnd(); which produces a reasonable display (camera at 0,5,0 looking down the y axis). So rather than do the calculation manually, I should be able to use the OpenGL model transformation. I write this code: glMatrixMode (GL_MODELVIEW); GLfloat shadow[] = { -1,0,0,0, 1,0,-1,1, 0,0,-1,0, 0,0,0,-1 }; glLoadMatrixf( shadow ); glBegin( GL_QUADS ); glVertex3f( 0.5,0.2,-0.5); glVertex3f( 0.5,0.2,-1.5); glVertex3f( 0.5,0.5,-1.5); glVertex3f( 0.5,0.5,-0.5); glEnd(); But this produces a blank screen! What am I doing wrong? Is there some debug mode where I can print out the transformed vertices, so I can see where they are ending up? Note: People have suggested that using glMultMatrixf() might make a difference. It doesn't. Replacing glLoadMatrixf( shadow ); with glLoadIdentity(); glMultMatrixf( shadow ); gives the identical result (of course!)
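
    One way to answer the debugging question is to read the current matrix back and apply it to a vertex on the CPU. Here is a sketch, with the caveat that glLoadMatrixf treats the array as column-major, so a matrix written out in row order arrives transposed, which is worth ruling out here:

        #include <stdio.h>
        #include <GL/gl.h>

        /* Print where the current modelview matrix sends a vertex. */
        void print_transformed(GLfloat x, GLfloat y, GLfloat z)
        {
            GLfloat m[16];
            glGetFloatv(GL_MODELVIEW_MATRIX, m);   /* column-major: m[col*4 + row] */
            GLfloat in[4] = { x, y, z, 1.0f };
            GLfloat out[4];
            for (int row = 0; row < 4; row++) {
                out[row] = 0.0f;
                for (int col = 0; col < 4; col++)
                    out[row] += m[col * 4 + row] * in[col];
            }
            /* divide by w to get the projected point */
            printf("(%f, %f, %f)\n", out[0] / out[3], out[1] / out[3], out[2] / out[3]);
        }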

    Read the article

  • Partial Submit vs. Auto Submit

    - by Frank Nimphius
    Partial Submit ADF Faces adds the concept of partial form submit to JavaServer Faces 1.2 and beyond. A partial submit actually is a form submit that does not require a page refresh and only updates components in the view that are referenced from the command component's PartialTriggers property. Another option for refreshing a component in response to a partial submit is to call AdfFacesContext.getCurrentInstance().addPartialTarget(component_instance_handle_goes_here) in a managed bean. If a form contains required fields that the user left empty when invoking the partial submit, then errors are shown for each of these fields, as the full form gets submitted. Autosubmit An input component that has its autosubmit property set to true also performs a partial submit of the form. However, this time it doesn't submit the entire form but only the component that triggers the submit, plus the components that reference it in their PartialTriggers property. For example, consider a form that has three input fields inpA, inpB and inpC, with autosubmit=true set on inpA and required=true set on inpB and inpC. use case 1: Running the view, entering data into inpA and then tabbing out of the field will submit the content for inpA but not for inpB and inpC. Furthermore, none of the required field settings on inpB and inpC causes an error. use case 2: You change the configuration of inpC and set its PartialTriggers property to point to the ID of component inpA. When rerunning the sample, entering a value into inpA and tabbing out of the field will now submit the inpA and inpC fields and thus show an error for the missing required value on inpC. Internally, using autosubmit=true on an input component sets the event root to just this field, which is good to have in case of dependent field validation or behavior. The event root can be extended to include other components by using the PartialTriggers property on these components to point to the input field that has autosubmit=true defined. PartialSubmit vs. AutoSubmit Partial submit set on a command component submits the whole form and leaves it to the developer to decide which UI component is refreshed in response. Client-side required field validation (as well as the server-side equivalent) is not disabled but executed in this scenario. Setting immediate=true on the command item to skip validation doesn't help, as it would also skip the model update. Auto submit is a functionality on the input components and also performs a partial form submit. However, in addition an event root is defined that narrows the scope for the submitted data and thus the components that are validated on the request. To read more about this topic, see: http://docs.oracle.com/cd/E23943_01/web.1111/b31973/af_lifecycle.htm#CIAHCFJF
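
    In page markup, the three-field example might look like this (a sketch using the standard ADF Faces attributes mentioned above):

        <af:inputText id="inpA" label="A" autoSubmit="true"/>
        <af:inputText id="inpB" label="B" required="true"/>
        <!-- partialTriggers widens the event root: inpC is now submitted and
             validated whenever inpA autosubmits (use case 2) -->
        <af:inputText id="inpC" label="C" required="true" partialTriggers="inpA"/>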

    Read the article

  • Play or Lift: which one is more explicit?

    - by Andrea
    I am going to investigate web development with Scala, and the choice is between learning Lift or Play: probably I will not have enough time to try both, at least at first. Now, many comparisons between the two are available on the internet, but I would like to know how they compare with respect to being explicit and involving less magic. Let me explain what I mean by example. I have used, to various degrees, CakePHP, symfony2, Django and Grails. I feel a very clear distinction between Django and symfony2, which are very explicit about what you are doing, and Grails and CakePHP, which try their best to guess what you are trying to achieve and often feel "magical". Let me give some examples comparing Django and Grails. In Django, views are functions that take a request as input and return a response. You can explicitly instantiate an instance of HttpResponse and populate its body with a string, or you can use shortcut functions to leverage the template system. In any case the return value from your view always has the same type. In contrast, the render method from Grails is highly polymorphic. You can throw a context at it and it will try to render a template which is found by convention using that context. Or you can pass it a pair of a template path and a context and that will work too. Or a string. Or XML. Grails tries hard to make sense of whatever you return from your controller. In the Django ORM, each model class has a static attribute representing the manager for that class. That manager exposes a fluent interface to build querysets. In Grails, you can get similar functionality by composing detached criteria. Still, the most common way to query objects seems to be the use of runtime-generated methods like FindUserByEmailNotNull or FindPostByDateGreaterThan. I will not go further, but my point is that in Django-like frameworks you have control over the whole flow of the request/response process, while in Grails-like ones I feel I only have to fill in the blanks and the framework will manage the rest of the flow for me. This is not to criticize Grails or CakePHP; which style you prefer is mainly a matter of taste. In fact, I happen to like some aspects of Grails, but I feel more comfortable with a framework which does less for me. Back to the point of the question: which one among Play and Lift is more explicit about what you do, and which one tries to simplify more of what you have to do with a layer of "magic"?
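
    To make the Django example concrete, a minimal sketch of the view-as-function model (the template name is a placeholder):

        from django.http import HttpResponse
        from django.shortcuts import render

        def plain_view(request):
            # explicit: construct the response object yourself
            return HttpResponse("Hello, world")

        def template_view(request):
            # or use a shortcut; the return value is still an HttpResponse
            return render(request, "greeting.html", {"name": "world"})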

    Read the article

  • Cheating on Technical Debt

    - by Tony Davis
    One bad practice guaranteed to cause dismay amongst your colleagues is passing on technical debt without full disclosure. There can only be two reasons for this. Either the developer or DBA didn't know the difference between good and bad practices, or concealed the debt. Neither reflects well on their professional competence. Technical debt, or code debt, is a convenient term to cover all the compromises between the ideal solution and the actual solution, reflecting the reality of the pressures of commercial coding. The one time you're guaranteed to hear one developer, or DBA, pass judgment on another is when he or she inherits their project, and is surprised by the amount of technical debt left lying around in the form of inelegant architecture, incomplete tests, confusing interface design, no documentation, and so on. It is often expedient for a Project Manager to ignore the build-up of technical debt, the cut corners, not-quite-finished features and rushed designs that mean progress is satisfyingly rapid in the short term. It's far less satisfying for the poor person who inherits the code. Nothing sends a colder chill down the spine than the dawning realization that you've inherited a system crippled with performance and functional issues that will take months of pain to fix before you can even begin to make progress on any of the planned new features. It's often hard to justify this 'debt paying' time to the project owners and managers. It just looks as if you are making no progress, in marked contrast to your predecessor. There can be many good reasons for allowing technical debt to build up, at least in the short term. Often, rapid prototyping is essential, there is a temporary shortfall in test resources, or the domain knowledge is incomplete. It may be necessary to hit a specific deadline with a prototype, or proof-of-concept, to explore a possible market opportunity, with planned iterations and refactoring to follow later. However, it is a crime for a developer to build up technical debt without making this clear to the project participants. He or she needs to record it explicitly. A design compromise made in order to hit a deadline, be it an outright hack, or a decision made without time for rigorous investigation and testing, needs to be documented with the same rigor with which one tracks a bug. What's the best way to do this? Ideally, we'd have some kind of objective assessment of the level of technical debt in a software project, although that smacks of Science Fiction even as I write it. I'd be interested to hear of any methods you've used, but I'm sure most teams have to rely simply on the integrity of their colleagues and the clear perceptions of the project manager… Cheers, Tony.

    Read the article

  • The Buzz at the JavaOne Bookstore

    - by Janice J. Heiss
    I found my way to the JavaOne bookstore, a hub of activity. Who says brick and mortar bookstores are dead? I asked what was hot and got two answers: Hadoop in Practice by Alex Holmes was doing well. And Scala for the Impatient by noted Java Champion Cay Horstmann also seemed to be a fast seller. Hadoop in Practice: Hadoop is a framework that organizes large clusters of computers around a problem. It is touted as especially effective for large amounts of data, and is used by such companies as Facebook, Yahoo, Apple, eBay and LinkedIn. Hadoop in Practice collects nearly 100 Hadoop examples and presents them in a problem/solution format with step-by-step explanations of solutions and designs. It's very much a participatory book intended to make developers more at home with Hadoop. The author, Alex Holmes, is a senior software engineer with more than 15 years of experience developing large-scale distributed Java systems. For the last four years, he has gained expertise in Hadoop, solving Big Data problems across a number of projects. He has presented at JavaOne and Jazoon and is currently a technical lead at VeriSign. At this year's JavaOne, he is presenting a session with VeriSign colleague Karthik Shyamsunder called "Java: A Perfect Platform for Data Science" where they will explain how the Java platform has emerged as a perfect platform for practicing data science, and also talk about such technologies as Hadoop, Hive, Pig, HBase, Cassandra, and Mahout. Scala for the Impatient: San Jose State University computer science professor and Java Champion Cay Horstmann is the principal author of the highly regarded Core Java. Scala for the Impatient is a basic, practical introduction to Scala for experienced programmers. Horstmann has a presentation summarizing the themes of his book on his website. On the final page he offers an enticing summary of his conclusions: * Widespread dissatisfaction with Java + XML + IDEs -- Don't make me eat Elephant again * A separate language for every problem domain is not efficient -- It takes time to master the idioms * "JavaScript Everywhere" isn't going to scale * Trend is towards languages with more expressive power, less boilerplate * Will Scala be the "one ring to rule them"? * Maybe -- If it succeeds in industry -- If student-friendly subsets and tools are created The popularity of both books echoed comments by IBM Distinguished Engineer Jason McGee, who closed his part of the Sunday JavaOne keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them – JavaScript, JRuby, Scala, Python and so forth. Java developers increasingly must know the strengths and weaknesses of such languages going forward.

    Read the article

  • Software monetization that is not evil

    - by t0x1n
    I have a free open-source project with around 800K downloads to date. I've been contacted by some monetization companies from time to time and turned them down, since I didn't want toolbar malware associated with my software. I was wondering, however: is there a non-evil way to monetize software? Here are the options as I know them: Add a donation button. I don't feel comfortable with that as I really don't need "donations" - I'm paid quite well. Donating users may feel entitled to support etc. (see the second to last bullet). Add ads inside your application. On the web that may be acceptable, but in a desktop program it looks incredibly lame. Charge a small amount for each download. This model works well in the mobile world, but I suspect no one will go for it on the desktop. It doesn't mix well with open source, though I suppose I could charge only for the binaries (most users won't go to the hassle of compiling the sources). People may expect support etc. after having explicitly paid (see next bullet). Make money off a service / community / support associated with the program. This is one route I definitely don't want to take; I don't want any sort of hassle beyond coding. I assure you, the program is top notch (albeit simple), and I'm not aware of any bugs as of yet (there are support forums and blog comments where users may report them). It is also very simple, documented, and discoverable, so I do think I have a case for supplying it "as is". Add affiliate suggestions to your installer. If you use a monetization company, you lose control over what they propose. Unless you can establish some sort of strong trust with the company to supply quality suggestions (I sincerely doubt it), I can't have that. Choosing your own affiliate (e.g. directly suggesting Google Toolbar) is possibly the only viable solution to my mind. Problem is, where do I find a solid affiliate that could actually give value to the user rather than infect his computer with crapware? I thought maybe Babylon (not the toolbar of course, I hate toolbars)?

    Read the article

  • Questions about identifying the components in MVC

    - by luiscubal
    I'm currently developing a client-server application in node.js, Express, mustache and MySQL. However, I believe this question should be mostly language and framework agnostic. This is the first time I'm doing a real MVC application and I'm having trouble deciding exactly what each component means. (I've done web applications that could perhaps be called MVC before, but I wouldn't confidently refer to them as such.) I have a server.js that ties the whole application together. It initializes all other components (including the database connection, and what I think are the "models" and the "views"), receives HTTP requests and decides which "views" to use. Does this mean that my server.js file is the controller? Or am I mixing code that doesn't belong there? What components should I break the server.js file into? Some examples of code that's in the server.js file: var connection = mysql.createConnection({ host : 'localhost', user : 'root', password : 'sqlrevenge', database : 'blog' }); //... app.get("/login", function (req, res) { //Function handles a GET request for login forms if (process.env.NODE_ENV == 'DEVELOPMENT') { mu.clearCache(); } session.session_from_request(connection, req, function (err, session) { if (err) { console.log('index.js session error', err); session = null; } login_view.html(res, user_model, post_model, session, mu); //I named my view functions "html" in case I might want to add other output types (such as a JSON API), or should I opt for completely separate views then? }); }); I have another file named session.js. It receives a cookies object and reads the stored data to decide if it's a valid user session or not. It also includes a function named login that changes the value of the cookies. First, I thought it would be part of the controller, since it kind of dealt with user input and supplied data to the models. Then, I thought that maybe it was a model since it dealt with the application data/database and the data it supplies is used by views. Now, I'm even wondering if it could be considered a View, since it outputs data (cookies are part of HTTP headers, which are output).
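
    One possible split, just as a sketch (file and function names here are invented for illustration, not taken from the project):

        // controllers/login.js -- route handlers move out of server.js,
        // which keeps server.js as pure wiring (one "controller" per area)
        module.exports = function (app, connection, views) {
          app.get('/login', function (req, res) {
            // session checking and model access stay behind this function
            views.login.html(res /* , user_model, post_model, session, mu */);
          });
        };

        // in server.js:
        //   require('./controllers/login')(app, connection, views);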

    Read the article

  • Restoring MSDB

    - by David-Betteridge
    We recently performed a disaster recovery exercise which included the restoration of the MSDB database onto our DR server. I did a quick google to see if there were any special considerations and found the following MS article: Considerations for Restoring the model and msdb Databases (http://msdn.microsoft.com/en-us/library/ms190749(v=sql.105).aspx). It said both the original and replacement servers must be on the same version; I double-checked, and in my case they are both SQL Server 2008 R2 SP1 (10.50.2500). So I went ahead and stopped SQL Server Agent, restored the database and restarted the agent. Checked the jobs and they were all there; everything looked great, and it stayed that way until the server was rebooted a few days later. Then the syspolicy_purge_history job started failing on the 3rd step with the error message "Unable to start execution of step 3 (reason: The PowerShell subsystem failed to load [see the SQLAGENT.OUT file for details]; The job has been suspended). The step failed." A bit more googling pointed me to the msdb.dbo.syssubsystems table: SELECT * FROM msdb.dbo.syssubsystems WHERE start_entry_point ='PowerShellStart' And in particular the value of subsystem_dll: it still had the path to SQLPOWERSHELLSS.DLL on the old server. The DR instance has a different name to the live instance, and so the paths are different. This was quickly fixed with the following SQL: Use msdb; GO sp_configure 'allow updates', 1; RECONFIGURE WITH OVERRIDE; GO UPDATE msdb.dbo.syssubsystems SET subsystem_dll='C:\Program Files\Microsoft SQL Server\MSSQL10_50.DR\MSSQL\binn\SQLPOWERSHELLSS.DLL' WHERE start_entry_point ='PowerShellStart'; GO sp_configure 'allow updates', 0; RECONFIGURE WITH OVERRIDE; GO Stopped and started SQL Server Agent, and now the job completes. I then wondered if anything else might be broken: SELECT subsystem_dll FROM msdb.dbo.syssubsystems shows a further 10 wrong paths - fortunately for parts of SQL (replication, SSIS etc.) we aren't using! Lessons Learnt: 1. DR exercises are a good thing! 2. Keep the Live and DR environments as similar as possible.

    Read the article

  • links for 2011-01-31

    - by Bob Rhubart
    Do (Software) Architects Architect? "The first question, is 'Why is architect being used as a verb?' Merriam-Webster dictionary does not contain a definition of architect as a verb, nor do many other recognized dictionaries." -- TheCPUWizard (tags: softwarearchitecture) Oracle Business Intelligence Blog: Gartner Magic Quadrant for BI Platforms 2011 "Oracle customers indicate they deploy the Oracle Business Intelligence Suite Enterprise Edition (OBIEE) platform to support among the most complex deployments in our survey." - Gartner (tags: oracle businessintelligence gartner) Oracle BI Server Modeling, Part 1- Designing a Query Factory (Oracle BI Foundation) Bob Ertl lays the groundwork for Business Intelligence modeling concepts with a look at "the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing." (tags: oracle otn businessintelligence) Tom Graves: Modelling people in enterprise-architecture Tom says: "One of the key characteristics of ‘crossing the chasm’ to a viable whole-of-enterprise architecture is the explicit inclusion of people. In short, we need to be able to model and map where people fit in relation to the architecture. But there’s a catch. A big catch." (tags: entarch) Java developer webcasts for customers and partners (SOA Partner Community Blog) Jurgen Kress shares info on several upcoming online events focused on WebLogic. (tags: weblogic oracle otn soa) Business SOA: Data Services are bogus, Information services are real Steve Jones says: "The other day when I was talking about MDM a bright spark pointed out that I hated data services but wasn't MDM just about data services?" (tags: SOA MDM) Andrejus Baranovskis's Blog: Configuring Missing Contribution Folders for Oracle UCM 11g and WebCenter 11g PS3 Andrejus says: "After doing some research on UCM, we found that Folders_g component must be configured in UCM, for Contribution Folders to be enabled." (tags: oracle otn oracleace UCM webcenter enterprise2.0) Wim Coekaerts: Converting an Oracle VM VirtualBox VM into an Oracle VM Server image Wim Coekaerts offers a few simple steps to convert an existing Oracle VM VirtualBox image.  (tags: oracle otn virtualization virtualbox) Stefan Hinker: Secure Deployment of Oracle VM Server for SPARC This new paper from Stefan Hinker will help you understand the general security concerns in virtualized environments as well as the specific additional threats that arise out of them. (tags: oracle otn SPARC virtualization enterprisearchitecture) The EA Roadmap to Rationalize, Standardize, and Consolidate the IT Portfolio Enterprise IT is in a state of constant evolution. As a result, business processes and technologies become increasingly more difficult to change and more costly to keep up-to-date. (tags: entarch oracle otn)

    Read the article

  • Live CD has black screen HP DV6

    - by Shaun Killingbeck
    Attempting to install/try Ubuntu (11.10, 12.04) on my new laptop, using a live CD (and I also tried USB). I get the purple screen (with the man/keyboard at the bottom) and after that the screen flashes bright white before going black. Ubuntu continues to load in the background, with the login sound etc., but the screen is off. I have tried as many different solutions as I could find, including (separately) the boot options nomodeset, xforcevesa, i915.modeset=0, and also i915.modeset=1: varying consequences, but either I end up at a blinking cursor with no prompt, a command line (startx fails: no screen found), or the original blank screen again. Tried booting from VirtualBox: it crashes at the same place the screen would go blank when using a CD/USB. Tried 11.04: I don't have this problem BUT when trying to install, I get a ubi-partman error 141 (possibly down to the three partitions that came on my laptop... not sure why HP needed their own separate partition for HP Tools...). Model: HP Pavilion DV6 6B08SA. Processor: AMD Quad-Core A6-3410MX APU with Radeon HD 6545G2 Dual Graphics (1.6 GHz, 4 MB L2 cache). Chipset: AMD RS880M. Any help would be greatly appreciated. I just want to be able to partition the drive and install Ubuntu. I'm assuming the issue is graphics card related, although I have no confirmation of that. Update: Tried the workarounds on https://wiki.ubuntu.com/X/Troubleshooting/BlankScreen - set gfxpayload=text changed nothing, removing splash did nothing and setting vesafb.nonsense=1 did nothing either. I'd like to be able to collect some log information somehow, but I can't get to a command line from the live CD. Tried using the latest 12.04 beta: same issue. Tried nomodeset without splash or quiet, and get the following (tail of) output before it freezes on that screen: * Starting configure network device security [OK] * Starting configure network device [OK] [ 25.720899] ieee80211 phy0: w1_ops_config: change monitor mode: false (implement) [ 25.720923] ieee80211 phy0: w1_ops_config: change power-save mode: false (implement) * Starting restore sound card(s') mixer state(s) [fail] [ 25.721849] ieee80211 phy0: w1_ops_bss_info_changed: qos enabled: false (implement) * Stopping save kernel messages [OK] * Starting bluetooth [OK] * PulseAudio configured for per-user sessions saned disabled; edit /etc/default/saned [ 25.988016] hci_cmd_timer: hci0 command tx timeout [ 26.207225] bad LUN (0:1) [ 26.223735] bad target number (1:0) [ 26.252111] bad target number (2:0) [ 26.272170] bad target number (3:0) [ 26.300154] bad target number (4:0) [ 26.328162] bad target number (5:0) [ 26.344180] bad target number (6:0) [ 26.368142] bad target number (7:0) * Checking battery state... [OK] * Stopping System V runlevel capability [OK] Does this give any indication of the problem? The false (implement) messages also reappear when I press the power button to ask it to shut down, followed by a [fail] status for killing remaining processes.

    Read the article

  • links for 2010-12-22

    - by Bob Rhubart
    @hajonormann: BPM: Top Seven Architectural Topics in 2010 Oracle ACE Director Hajo Normann offers details on how to design a BPM/SOA solution including: modeling human interaction, improving BPM models, orchestrating composed services, central task management, new approaches for business-IT alignment, solutions for non-deterministic processes, and choreography. (tags: oracle otn soasymposium infoq soa bpm) InfoQ: Simplicity, The Way of the Unusual Architect Dan North talks about the tendency developers-becoming-architects have to create bigger and more complex systems. Without trying to be simplistic, North argues for simplicity, offering strategies to extract the simple essence from complex situations. (tags: ping.fm) Fun with Sun Ray, 3D, Oracle VM x86 and SRIOV (Wim Coekaerts Blog) "One of the things I like about my job is that I get to play around with stuff and make use of the technologies we work on in my teams. Sort of my own little playground." - Wim Coekaerts (tags: oracle otn virtualization oraclevm) Oracle VM VirtualBox 4.0.0 Released! (Oracle's Virtualization Blog) And you were worried about what to get that special someone for Christmas... (tags: oracle otn virtualization virtualbox) Virtual Developer Day: Oracle WebLogic Server & Java EE (#OTNVDD) (Oracle Technology Network Blog (aka TechBlog)) "Virtual Developer Day is back with a vengeance! On Feb. 1, login to learn how Oracle WebLogic Server enables a whole new level of productivity for enterprise developers." Registration is open. (tags: oracle otn events webinar java) New Coherence 3.6 Oracle University Course (Cristóbal Soto's Blog) Cristóbal Soto shares information on the "Oracle Coherence 3.6: Share and Manage Data in Clusters" course now available through Oracle University. (tags: oracle otn grid coherence) The Aquarium: Oracle WebLogic Server & Java EE developer day "Oracle WebLogic is well on its way to contribute to the general Java EE 6 momentum and the OTN Blog has just announced a Virtual Developer Day for Oracle WebLogic." (tags: oracle otn weblogic java) Enterprise 2.0 Use Cases for Semantic Web (Reiser 2.0) "How can an enterprise improve the efficiency and effectiveness of their Knowledge and Community model leveraging semantic technologies and social networking dynamics?" - Peter Reiser (tags: oracle otn enterprise2.0 semanticweb) John Gøtze: European Interoperability Framework 2.0 "This week, the European Commission announced an updated interoperability policy in the EU. The Commission has committed itself to adopt a Communication that introduces the European Interoperability Strategy (EIS) and an update to the European Interoperability Framework (EIF)..." - John Gøtze (tags: entarch Interoperability) Andy Mulholland: Maybe Web 3.0 is quite understandable – and a natural result "The idea of Web 1.0 = content, Web 2.0 = people and Web 3.0 = services has a nice symmetrical feel to it, in fact it feels basically right as such a definition would include the two other major definitions as well. So if we put these things all together what picture do we see?" - Andy Mulholland (tags: web2.0 web3.0) Ken Downs: A Working Definition of Business Logic, with Implications for CRUD Code "The Wikipedia entry on 'Business Logic' has a wonderfully honest opening sentence stating that 'Business logic, or domain logic, is a non-technical term...'"  (tags: businesslogic crud)

    Read the article

  • Updating My Online Boggle Solver Using jQuery Templates and WCF

    With WebForms, each ASP.NET page's rendered output includes a <form> element that performs a postback to the same page whenever a Button control within the form is clicked, or whenever the user modifies a control whose AutoPostBack property is set to True. This model simplifies web page development, but carries with it some costs - namely, the large amount of data exchanged between the client and the server during a postback. On postback the browser sends the values of all of its form fields (including hidden ones, like view state, which may be quite large) to the server; the server then sends back the entire contents of the web page. While there are some scenarios where this amount of information needs to be exchanged, in many cases the user has performed some action that requires far less information to be exchanged. With a little bit of forethought and code we can have the browser and server exchange much less data, which leads to more responsive web pages and an improved user experience. Over the past several weeks I've been writing an article series on accessing server-side data from client script. Rather than rely solely on forms and postbacks, many websites use JavaScript code to asynchronously communicate with the server in response to the page loading or some other user action. The server, upon receiving the JavaScript-initiated request, returns just the data needed by the browser, which the browser then seamlessly integrates into the web page. There are a variety of technologies and techniques that can be employed to provide both the needed server- and client-side functionality. Last week's article, Using WCF Services with jQuery and the ASP.NET Ajax Library, explored using the Windows Communication Foundation, or WCF, to serve data from the web server and showed how to consume such a service using both the ASP.NET Ajax Library and jQuery. In a previous 4Guys article, Creating an Online Boggle Solver, I built an application to find all solutions in a game of Boggle. (Boggle is a word game trademarked by Parker Brothers and Hasbro that involves several players trying to find as many words as they can in a 4x4 grid of letters.) This article takes the lessons learned in Using WCF Services with jQuery and the ASP.NET Ajax Library and uses them to update the user interface for my online Boggle solver, replacing the existing WebForms-based user interface with a more modern and responsive interface. I also used jQuery Templates, a JavaScript-based templating library that is useful for displaying the results from a server-side service. Read on to learn more! Read More >
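
    On the client, that exchange boils down to something like the following sketch (the endpoint name and response shape are placeholders, not the solver's actual API; WCF AJAX endpoints are often configured to wrap results in a "d" property):

        // Ask a server-side WCF service for solutions, then render them
        // client-side: no postback, no view state on the wire.
        $.ajax({
            type: "POST",
            url: "BoggleService.svc/SolveBoard",     // placeholder endpoint
            contentType: "application/json; charset=utf-8",
            data: JSON.stringify({ board: "ABCDEFGHIJKLMNOP" }),
            success: function (result) {
                // e.g. hand result.d to a jQuery template for rendering
                $("#solutions").html(result.d.join(", "));
            }
        });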

    Read the article

  • cloud programming for OpenStack in C / C++

    - by Basile Starynkevitch
    (Sorry for such a fuzzy question, I am very much a newbie to cloud programming.) I am interested in designing (and developing) a (free software) program in C or C++ (probably, most of it being meta-programmed, i.e. part of the C code being generated). I am still in the thinking / designing phase, and I might perhaps give up. For reference, I am the main architect and implementor of GCC MELT, a domain specific language to extend the GCC compiler (the MELT language is translated to C/C++ and is bootstrapped: the MELT to C/C++ translator being written in MELT). And I am dreaming of extending it with some cloud computing abilities. But I am a newbie in cloud computing. (I am only interested in free-software, GPLv3-friendly cloud computing, which probably means OpenStack.) I believe that "compiling on the cloud with some enhanced GCC" could make sense (for super-optimizations or static analysis of e.g. an entire Linux distribution, or at least of massive GCC-compiled free software like Qt, GCC itself, or the Linux kernel). I'm dreaming of a MELT-specific monitoring program which would store, communicate, and enhance GCC compilation (extended by MELT). So the picture would be that each GCC process (actually the cc1 or cc1plus started by the gcc driver, suitably extended by some MELT extension) would communicate with some monitor. That "monitoring/persisting" program would run "on the cloud" (and probably manage some information produced by GCC, e.g. in NoSQL databases). So, how should some (yet to be written) C program (some Linux daemon) be designed to be cloud-friendly? So far, I understood that it should provide some Web service, probably through a RESTful service, so it should use an HTTP server library like onion. And OpenStack is able to start such services (e.g. a dozen of them). But I don't have a clear picture of what OpenStack brings. So far, I noticed the ability to manage (and distribute) virtual machines (with some Python API). It is less clear how it can distribute some ELF executable, how it can start it, etc. Do you have any references or examples of C / C++ programming on the cloud? How should a "cloud-friendly" (actually, OpenStack-friendly) C/C++ server application be designed?
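
    Setting onion aside, the bare mechanics of such a service are small. Here is a deliberately minimal sketch (error handling omitted) of the kind of HTTP endpoint a monitor daemon could expose, which an orchestrator like OpenStack could then launch in multiple instances:

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>

        int main(void)
        {
            int srv = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr = {0};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(8080);
            bind(srv, (struct sockaddr *)&addr, sizeof addr);
            listen(srv, 16);
            for (;;) {
                int c = accept(srv, NULL, NULL);
                char buf[4096];
                read(c, buf, sizeof buf);   /* request details ignored here */
                const char *resp =
                    "HTTP/1.1 200 OK\r\n"
                    "Content-Type: application/json\r\n\r\n"
                    "{\"status\":\"compiling\"}";
                write(c, resp, strlen(resp));
                close(c);
            }
        }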

    Read the article

  • Newsletter sent with drupal goes to Spam Folder [closed]

    - by HerrSerker
    Possible Duplicate: How could I prevent my mail from being recognized as spam? I'm sending a newsletter with Drupal's Simplenews module. The website is hosted on a 1und1 server in Germany (as seen in the header domains online.de and kundenserver.de). When I send it, it goes to the spam folder in Yahoo & Gmail mailboxes, but not in web.de, Hotmail and GMX mailboxes. Here is what I have in the mail header (for Yahoo in this example): Received: from 12.345.678.90 (EHLO sXXXXXXXXX.online.de) (12.345.678.90) by mtaXXX.mail.kks.yahoo.co.jp with SMTP; Fri, 15 Jun 2012 18:45:24 +0900 Received: from [127.0.0.1] (helo=infongdXXXXX.rtr.kundenserver.de) by sXXXXXXXXX.online.de with esmtp (Exim 4.72) (envelope-from <[email protected]>) id 1SfT5k-00068r-Q8 for [email protected]; Fri, 15 Jun 2012 11:45:20 +0200 Received: from 83.136.130.41 (IP may be forged by CGI script) by infongdXXXXX.rtr.kundenserver.de with HTTP id 0Z04SW-1SQTKp3LPr-00YxYk; Fri, 15 Jun 2012 11:45:20 +0200 From: SENDER <[email protected]> To: "[email protected]" <[email protected]> Date: Fri, 15 Jun 2012 11:45:20 +0200 Subject: This is the subject of the newsletter Thread-Topic: This is the subject of the newsletter Thread-Index: Ac1K3nT42juzo7uCSkq5dTlby1ZvpQ== List-Unsubscribe: <http://www.example.com/newsletter/confirm/remove/XXXXXXXXX> X-MS-Has-Attach: X-Auto-Response-Suppress: All X-MS-TNEF-Correlator: x-originating-ip: [12.345.678.90] authentication-results: mtaXXX.mail.kks.yahoo.co.jp from=example.com; domainkeys=neutral (no sig); dkim=neutral (no sig) [email protected] errors-to: "SENDER" <[email protected]> received-spf: none (sXXXXXXXXX.online.de: domain of [email protected] does not designate permitted sender hosts) x-apparently-to: [email protected] via 123.45.67.890; Fri, 15 Jun 2012 18:45:25 +0900 x-sender-info: <[email protected]> content-length: 13762 Content-Type: multipart/alternative; boundary="_000_7471797868716571796675707173696675806577726778666766687_" MIME-Version: 1.0 I cannot see any direct spam filter message in this, but I'm kind of stunned by the "Received: from 83.136.130.41 (IP may be forged by CGI script)" part. After I searched a bit, it seems that this is a special 'feature' of 1und1 mail servers. Here are my questions: Is it possible that, if I get rid of the 'IP may be forged' part, the mail will no longer be regarded as spam? If so, does anyone know how I can get rid of it in Drupal?
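
    One detail worth noting in those headers: received-spf is "none", i.e. the sending domain publishes no SPF record. A sketch of what one could look like in the domain's DNS zone (the IP below is a documentation placeholder; the right mechanisms depend on which hosts actually send the mail):

        ; TXT record for the sending domain (illustrative values only)
        example.com.  3600  IN  TXT  "v=spf1 a mx ip4:203.0.113.10 ~all"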

    Read the article

  • How to export 3D models that consist of several parts (e.g. a turret on a tank)?

    - by Will
    What are the standard alternatives for the mechanics of attaching turrets and such to 3D models for use in-game? I don't mean the logic, but rather the graphics aspects. My naive approach is to extend the MD2-like format that I'm using (blender-exported using a script) to include a new set of properties for a mesh (see the struct sketch below) such that it: is anchored in another 'parent' mesh. The anchor is a point and normal in the parent mesh and a point and normal in the child mesh; these will always be collinear, giving the child rotation but not translation relative to the parent point. has a normal that is aligned with a 'target'. Classically this target is the enemy that is being engaged, but it might be some other vector, e.g. 'the wind' (for sails and flags (and smoke, which is a particle system, but the same principle applies)) or 'upwards' (e.g. so bodies of riders bend properly when riding a horse up an incline, etc.). has maximum and minimum limits and a speed coefficient for the anchor and target alignments. There is also game logic for multiple turrets on a model, deciding which engages which enemy; 'primary' and 'secondary' or 'target0' ... 'targetN' or some such annotation will be there. So to illustrate, a classic tank would be made from three meshes: a main body mesh, a turret mesh that is anchored to the top of the main body so it can spin only horizontally, and a barrel mesh that is anchored to the front of the turret and can only move vertically within some bounds. And there might be a fourth flag mesh on top of the turret that is aligned with 'wind', where wind is a function the engine solves that merges the environment's wind angle with the angle and velocity the vehicle is travelling at, or something fancy. This gives each mesh one degree of freedom relative to its parent. Things with multiple degrees of freedom can be modelled by zero-vertex connecting meshes, perhaps? This is where I think the approach I outlined begins to feel inelegant, yet perhaps it's still a workable system? This is why I want to know how it is done in professional games ;) Are there better approaches? Are there formats that already include this information? Is this routine?
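
    A sketch of what those extended per-mesh properties might look like as a struct (field names are illustrative, not part of any real MD2 extension):

        /* attachment record exported alongside each child mesh */
        typedef struct {
            float anchor_point[3];    /* attachment point in parent-mesh space */
            float anchor_normal[3];   /* hinge axis; child rotates about this */
            float child_point[3];     /* matching point in child-mesh space */
            float child_normal[3];    /* kept collinear with anchor_normal */
            int   target;             /* what to align with: enemy, wind, up... */
            float min_angle;          /* alignment limits, in radians */
            float max_angle;
            float turn_speed;         /* alignment speed coefficient */
            int   parent;             /* index of parent mesh, -1 for the root */
        } mesh_attachment;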

    Read the article

  • Five Fake Sounds Engineered to Make You Feel Better [Science]

    - by Jason Fitzpatrick
    As objects in our environment (like cars, ATMs, and phones) have grown lighter and quieter, scientists have been carefully engineering their sounds so that they continue to sound like we expect them to. Read on to see how. At the design blog Humans Invent they share five interesting ways that the world around us is being engineered so it sounds the way we expect it to. They start with the example of the car door. Years ago cars were almost entirely steel, the doors were weighty, and when you slammed them it sounded like one big hunk of steel locking into another big hunk of steel (which, in fact, it was). Newer cars are lighter but people still crave that substantial clunk. Humans Invent highlights the effect of consumer desire: A car door is essentially a hollow shell with parts placed inside it. Without careful design the door frame amplifies the rattling of mechanisms inside. Car companies know that if buyers don't get a satisfying thud when they close the door, it dents their confidence in the entire vehicle. To produce the ideal clunk, car doors are designed to minimise the amount of high frequencies produced (we associate them with fragility and weakness) and emphasise low, bass-heavy frequencies that suggest solidity. The effect is achieved in a range of different ways – car companies have piled up hundreds of patents on the subject – but usually involves some form of dampener fitted in the door cavity. Locking mechanisms are also tailored to produce the right sort of click and the way seals make contact is precisely controlled. On average it takes 1.8 seconds to close a car door but in that time you're witnessing a strange kind of symphony composed by engineers and designers whose goal is to reassure you that it's rock solid. They mention lock mechanisms, something you may never have thought about. A friend of mine had a Ford Focus some years ago, and that particular model had electric locks that, instead of giving a satisfying thunk or solid click, made this horrible gates-of-the-prison buzzing sound that was completely unnerving. Hit up the link below to see how sounds are engineered for car doors, electric motors, ATM machines, and more. 5 Fake Sounds Designed to Help Humans [Humans Invent via Boing Boing]

    Read the article

  • Who should control navigation in an MVVM application?

    - by SonOfPirate
    Example #1: I have a view displayed in my MVVM application (let's use Silverlight for the purposes of the discussion) and I click on a button that should take me to a new page. Example #2: That same view has another button that, when clicked, should open up a details view in a child window (dialog). We know that there will be Command objects exposed by our ViewModel bound to the buttons with methods that respond to the user's click. But, what then? How do we complete the action? Even if we use a so-called NavigationService, what are we telling it? To be more specific, in a traditional View-first model (like URL-based navigation schemes such as on the web or the SL built-in navigation framework) the Command objects would have to know what View to display next. That seems to cross the line when it comes to the separation of concerns promoted by the pattern. On the other hand, if the button wasn't wired to a Command object and behaved like a hyperlink, the navigation rules could be defined in the markup. But do we want the Views to control application flow and isn't navigation just another type of business logic? (I can say yes in some cases and no in others.) To me, the utopian implementation of the MVVM pattern (and I've heard others profess this) would be to have the ViewModel wired in such a way that the application can run headless (i.e. no Views). This provides the most surface area for code-based testing and makes the Views a true skin on the application. And my ViewModel shouldn't care if it displayed in the main window, a floating panel or a child window, should it? According to this apprach, it is up to some other mechanism at runtime to 'bind' what View should be displayed for each ViewModel. But what if we want to share a View with multiple ViewModels or vice versa? So given the need to manage the View-ViewModel relationship so we know what to display when along with the need to navigate between views, including displaying child windows / dialogs, how do we truly accomplish this in the MVVM pattern?
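
    For concreteness, one common shape for that runtime "bind" mechanism looks like this (a sketch only; the interface and class names are invented for illustration):

        // The ViewModel depends on an abstraction, never on a View type,
        // so the application can still run "headless" in tests.
        public interface INavigationService
        {
            void NavigateTo(string viewModelKey);   // a key, not a View
            void ShowDialog(object childViewModel);
        }

        public class OrderViewModel
        {
            private readonly INavigationService _nav;
            public OrderViewModel(INavigationService nav) { _nav = nav; }

            public void OnDetailsCommand(object order)
            {
                // which View appears for this ViewModel is resolved elsewhere,
                // e.g. by a DataTemplate or a registration table at startup
                _nav.ShowDialog(order);
            }
        }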

    Read the article

  • NDepend 4.0 Released

    - by Anthony Trudeau
    Last week version 4.0 of NDepend was released. NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high-quality code. A month ago I wrapped up my evaluation of the previous version of NDepend. The new version contains many minor changes, several bug fixes, and adds about 50 new code rules. The version also adds support for Visual Studio 11, .NET Framework 4.5, and Silverlight 5.0. But the biggest change was the shift from CQL to CQLinq.

    Introducing CQLinq

    The latest version replaces the CQL rules language with CQLinq (CQL is still an option, although the editor is buried). As you might guess, CQLinq is a flavor of Linq designed specifically for the code rules. The best way to illustrate the differences is with an example. I used the following CQL example in Part 3 of my review:

        WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    This same query looks like this when implemented in CQLinq:

        warnif count > 0 from t in Types where t.IsInterface == true && !t.NameLike("I") select t

    I like the syntax and it is a natural fit, but I found writing the queries frustrating in the Queries and Rules Edit window. The Queries and Rules Edit window replaces the CQL Query Edit window. The new editor has the same style of Intellisense as the previous editor. However, it has a few annoyances. The error indicator is a red block that has the tendency of obscuring your cursor. Additionally, writing CQLinq queries is like writing plain old Linq queries, so the fact that the editor uses Enter to select from Intellisense instead of Tab is jarring. These issues can be an obstacle to writing queries quickly. CQLinq makes it possible to write rules that weren't possible before. Additionally, a JustMyCode domain is now available, making it easy to eliminate generated code from the analysis.

    Should you Buy?

    I recommend NDepend overall. It has some rough points for me that I have detailed in my earlier evaluation (starting here). But it's definitely worth the money. The bigger question is: should I pay for the upgrade to 4.0? At this point I'm on the fence, but I would go for it if you need support for Visual Studio 11, .NET Framework 4.5, or Silverlight 5.0; or if you need one of the many rules that weren't possible before CQLinq.

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Resources: NDepend Release Notes
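    By way of illustration, a JustMyCode-scoped rule might look like the sketch below. This is my own example rather than one of NDepend's shipped rules, and the 30-line threshold is an arbitrary number chosen for the illustration:

        // warn on long methods, but only in hand-written code:
        // the JustMyCode domain filters out compiler- and designer-generated members
        warnif count > 0
        from m in JustMyCode.Methods
        where m.NbLinesOfCode > 30
        select m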

    Read the article

  • Why is my dual-boot Ubuntu partition showing up as a peripheral "root.disk"?

    - by Don
    I recently installed Ubuntu 12.04, which I had been booting from a USB key, as a dual-boot on my machine running Windows 7. From what I had read online while researching, I was prepared to have to shrink the Windows partition and all that. But I never had to - it really was just a few clicks here and there and it was installed. I'm still pretty confused about it, but whatever; it worked, the two peacefully coexist on my machine, and I have broken things to fix before I worry about fixing unbroken things. So yesterday I got it in my head to look at my partitions (I was considering making an all-new partition to install the Windows 8 Release Preview). What I saw confused me. Here's a screenshot of the disk utility. At this moment, there is nothing connected to my computer, and nothing in any of the optical drives/ports/card readers/etc. Can you help me figure out what's going on here? Don's Machine is, I believe, my Windows partition - that's the name I assigned my machine from Windows Explorer. PQSERVICE is, from what I can find online, also Windows, but having to do with backup. And SYSTEM REQUIRED, if I browse it in Ubuntu, is definitely something to do with booting, and I believe it is also Windows'. According to the sizes shown, those three together should use up my 500 GB HD. Then further down, as a "peripheral device", it lists that 31 GB disk. This is obviously my Ubuntu (Model: Linux Loop: root.disk), but why is it showing up as a peripheral? So, to sum up those questions and to add some more random ones I had: Why is Ubuntu showing up as a peripheral device? If the Windows sections take up all 500 GB, where does Ubuntu live? If I renamed the disk partitions, would my life become a nightmare (seriously - can I safely rename them)? Why didn't I have to resize the Windows partition in the first place? Would giving Ubuntu more space improve its performance (it hangs a lot)? Is it possible to have a partition for each OS (Windows 7 & 8, Ubuntu), a partition for files, and a separate partition for backups? Is this towards the good or bad idea end of the spectrum? @Elfy, would that explain why it keeps hanging? I guess I'll back up my files, rip it out, and reinstall it correctly later on today.
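    For context, a root.disk loop device is the signature of a Wubi-style install: Ubuntu lives as an ordinary disk-image file inside the Windows partition and is loop-mounted at boot, which would explain both the missing partition and the "peripheral" listing. A hedged way to confirm this from inside Ubuntu (the /host mount point and file path are Wubi's usual defaults; yours may differ):

        # if / is mounted from a loop device, this is a Wubi-style install
        mount | grep -E 'loop|root.disk'

        # the image itself normally lives as a plain file on the Windows partition
        ls -lh /host/ubuntu/disks/root.disk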

    Read the article

  • A brief note for customers running SOA Suite on AIX platforms

    - by christian
    When running Oracle SOA Suite with IBM JVMs on the AIX platform, we have seen performance slowdowns and/or memory leaks. On occasion, we have even encountered some OutOfMemoryError conditions and the concomitant Java coredump. If you are experiencing this issue, the resolution may be to configure -Dsun.reflect.inflationThreshold=0 in your JVM startup parameters. https://www.ibm.com/developerworks/java/library/j-nativememory-aix/ contains a detailed discussion of the IBM AIX JVM memory model, but I will summarize my interpretation and understanding of it in the context of SOA Suite, below. Java ClassLoaders on IBM JVMs are allocated a native memory area into which they are expected to map such things as jars loaded from the filesystem. This is an excellent memory optimization, as the file can be loaded into memory once and then shared amongst many JVMs on the same host, allowing for excellent horizontal scalability on AIX hosts. However, Java ClassLoaders are not used exclusively for loading files from disk. A performance optimization by the Oracle Java language developers converts reflective access from a JNI call into Java bytecodes, which are then amenable to HotSpot optimizations, amongst other things. This performance optimization is called inflation, and it is executed by generating a sun.reflect.DelegatingClassLoader instance dynamically to inject the Java bytecode into the virtual machine. It is generally considered an excellent optimization. However, it interacts very negatively with the native memory area allocated by the IBM JVM, effectively locking out memory that could otherwise be used by the Java process. SOA Suite and WebLogic are both very large users of reflection. They reflectively use many code paths in their operation, generating lots of DelegatingClassLoaders in normal operation. The IBM JVM slowdown and subsequent OutOfMemoryError are a direct result of the Java memory consumed by the DelegatingClassLoader instances generated by SOA Suite and WebLogic. Java garbage collection runs more frequently to try and keep memory available, until it can no longer do so and throws OutOfMemoryError. The setting sun.reflect.inflationThreshold=0 disables this optimization entirely, never allowing the JVM to generate the optimized reflection code. IBM JVMs are susceptible to this issue primarily because all Java ClassLoaders have this native memory allocation, which is shared with the regular Java heap. Oracle JVMs don't automatically give all ClassLoaders a native memory area, and my understanding is that jar files are never mapped completely from shared memory in the same way as IBM does it. This results in different behaviour characteristics on IBM vs Oracle JVMs.
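    For a WebLogic-hosted SOA Suite domain, one place to set the flag is the domain's bin/setDomainEnv.sh; the sketch below assumes a standard WebLogic domain layout, where the start scripts pick up the EXTRA_JAVA_PROPERTIES variable, so adjust the path and mechanism for your installation:

        # prepend the flag so it reaches every server started through the script
        EXTRA_JAVA_PROPERTIES="-Dsun.reflect.inflationThreshold=0 ${EXTRA_JAVA_PROPERTIES}"
        export EXTRA_JAVA_PROPERTIES

    Note the trade-off implied above: with inflation disabled entirely, reflective calls stay on the slower JNI path, so you are trading some reflection throughput for stable native memory on the IBM JVM.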

    Read the article
