Search Results

Search found 15099 results on 604 pages for 'runtime environment'.


  • Still prompted for a password after adding SSH public key to a server

    - by Nathan Arthur
    I'm attempting to setup a git repository on my Dreamhost web server by following the "Setup: For the Impatient" instructions here. I'm having difficulty setting up public key access to the server. After successfully creating my public key, I ran the following command: cat ~/.ssh/[MY KEY].pub | ssh [USER]@[MACHINE] "mkdir ~/.ssh; cat >> ~/.ssh/authorized_keys" ...replacing the appropriate placeholders with the correct values. Everything seemed to go through fine. The server asked for my password, and, as far as I can tell, executed the command. There is indeed a ~/.ssh/authorized_keys file on the server. The problem: When I try to SSH into the server, it still asks for my password. My understanding is that it shouldn't be asking for my password anymore. What am I missing? EDIT: SSH -v Log: Macbook:~ michaeleckert$ ssh -v [USER]@[SERVER URL] OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011 debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: /etc/ssh_config line 53: Applying options for * debug1: Connecting to [SERVER URL] [[SERVER IP]] port 22. debug1: Connection established. debug1: identity file /Users/michaeleckert/.ssh/id_rsa type -1 debug1: identity file /Users/michaeleckert/.ssh/id_rsa-cert type -1 debug1: identity file /Users/michaeleckert/.ssh/id_dsa type -1 debug1: identity file /Users/michaeleckert/.ssh/id_dsa-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.2 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-6+squeeze3 debug1: match: OpenSSH_5.5p1 Debian-6+squeeze3 pat OpenSSH_5* debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA [STRING OF NUMBERS AND LETTERS SEPARATED BY SEMI-COLONS] debug1: Host ‘[SERVER URL]' is known and matches the RSA host key. debug1: Found key in /Users/michaeleckert/.ssh/known_hosts:2 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /Users/michaeleckert/.ssh/id_rsa debug1: Trying private key: /Users/michaeleckert/.ssh/id_dsa debug1: Next authentication method: password [USER]@[SERVER URL]'s password: debug1: Authentication succeeded (password). Authenticated to [SERVER URL] ([[SERVER IP]]:22). debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = en_US.UTF-8 Welcome to [SERVER URL] Any malicious and/or unauthorized activity is strictly forbidden. All activity may be logged by DreamHost Web Hosting. Last login: Sun Nov 3 12:04:21 2013 from [MY IP] [[SERVER NAME]]$
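    Two things stand out in the debug output above (what follows is a sketch of a likely fix, not a confirmed answer): ssh only tries the default identities (/Users/michaeleckert/.ssh/id_rsa and id_dsa, both reported as "type -1", i.e. not found), so a key pair saved under a non-default name is never offered; and sshd silently ignores an authorized_keys file whose permissions are too loose when StrictModes is on (the default):

        # Offer the key explicitly (placeholders as in the question) ...
        ssh -i ~/.ssh/[MY KEY] [USER]@[MACHINE]

        # ... or persist the choice in ~/.ssh/config:
        #   Host [MACHINE]
        #       IdentityFile ~/.ssh/[MY KEY]

        # On the server, tighten the permissions sshd requires:
        chmod g-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys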

    Read the article

  • Paranoid management, contractor checking work [closed]

    - by user833345
    Just wanted to get some opinions and experiences on an issue I'm having at work. First, a little background. I've been working at a company for some time (past any probation periods) and rewriting a horrendous system. No tests, incomplete and broken functionality everywhere, enough copypasta to feed a small village, redundant code, more unused SQL tables than used ones and terrible performance. I've never seen such bad code, pretty much all of it is worthy of being posted on TheDailyWTF. The company has been operating for a number of years and have had a string of bad developers working on this system. I made a call on rewriting instead of refactoring since I judged it to be less work overall and decided that the result will address the requirements more appropriately, since the central requirement is to have a future-proof system for the next decade with plenty of room to scale up. Refactoring would have entailed untangling a huge ball of yarn and at the same time integrating it with a proper foundation or building a foundation from scratch. I've introduced the latest spiffy framework, unit & functional testing, CI, a bug tracker and agile workflow to the environment. I've fixed most of the performance issues of the old system (there were no indexes on any of the tables, for example). I've created an automated deployment process for the old system. The CTO has been maintaining the old system while I have been building the new one and he has been advising management that everything is being done as per best practices. However, management is hiring a contractor to come in and verify my work. In my experience, this is unprecedented. I can understand their reasoning to an extent, since they've had bad luck in the past, but can't help but feel somewhat offended at the fact that they distrust two senior developers who have been working with them for some time enough that a third party is being brought in. And it's not just me who is under watch - people's emails are constantly checked, someone had a remote desktop application installed on their computer of which I was asked to check the usage logs to try to determine if they were stealing sensitive data and there are CCTV cameras in one of the rooms. It's the first time I've decided to disable my Skype history at work. Am I right to feel indignant here? Has anyone else ever encountered such a situation? If so, how did it work out in the end? Was it worth sticking around? Should I just find another job?

    Read the article

  • My .NET Technology picks for 2011

    - by shiju
    My technology predictions for 2011: cloud computing and mobile application development will be the hottest trends of 2011. I expect Windows Azure to be very hot in 2011, with a lot of cloud computing adoption happening on Windows Azure. Web application scalability will be a big challenge for architects next year, and architecture approaches like CQRS will get some attention. Architects will look at different options for web application scalability, and adoption of NoSQL and document databases will grow in 2011. The following are my technology picks for the .NET stack. Windows Azure: Windows Azure will be one of the hottest technologies of 2011, and adoption of the cloud and of Windows Azure will get big attention next year. The Windows Azure platform is a flexible cloud-computing platform that lets you focus on solving business problems and addressing customer needs. There is no need to invest upfront in expensive infrastructure: pay only for what you use, scale up when you need capacity, and pull it back when you don't, while Microsoft handles the patches and maintenance, all in a secure environment with over 99.9% uptime. Silverlight 5: Silverlight is becoming a common technology for a variety of development platforms; you can develop Silverlight applications for the web, the desktop, and Windows Phone. The Silverlight 5 beta will be available during the first quarter of next year with many new features and capabilities. Silverlight 5 will be a powerful development platform for both web-based business apps and rich media solutions, and we can expect the final version at the end of 2011. Windows Phone 7 development tools: mobile application development will be very hot in 2011, and Windows Phone 7 will be one of the hottest technologies of next year. You can get an introduction to the Windows Phone 7 development tools from somasegar's blog post, and MSDN documentation is available from here. EF Code First: I am a big fan of Entity Framework's Code First approach, and I hope it will attract more people to Entity Framework 4. EF Code First lets you focus on the domain model, which enables Domain-Driven Development for applications; I expect DDD fans will love the Code First approach. Entity Framework 4 now supports three types of approaches, and these will attract different developer audiences. ASP.NET MVC 3: ASP.NET MVC 3 will be the hottest technology of the Microsoft web stack next year, and ASP.NET developers will move in large numbers to the ASP.NET MVC Framework from WebForms development. The new Razor view engine is great, will increase the adoption of ASP.NET MVC 3, and will improve productivity when working with ASP.NET MVC 3 views. You can build great web applications using ASP.NET MVC 3 and jQuery, with better maintainability, clean generated HTML, and even better performance. In my opinion, the best technology stack for web development is ASP.NET MVC 3 with Entity Framework 4 Code First as the ORM. Next year, you can expect more articles from my blog on ASP.NET MVC 3 and Entity Framework 4 Code First. RavenDB: NoSQL and document databases will get more attention in the coming year, and RavenDB will be the most notable document database on the .NET stack. RavenDB is an open source (with a commercial option) document database for the .NET/Windows platform, developed by Ayende Rahien. It is a .NET-focused document database that comes with a fully functional .NET client API and supports LINQ.
    I have written a few articles on RavenDB, which you can read here. Managed Extensibility Framework (MEF): many people haven't realized the power of MEF. MEF lets you create extensible applications and provides a great solution to the runtime extensibility problem. I hope more .NET developers will adopt MEF in their applications next year. You can get an excellent introduction to MEF from Anoop Madhusudanan's blog post "MEF or Managed Extensibility Framework – Creating a Zoo and Animals".
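    As a sketch of what the Code First approach looks like (the class and property names below are illustrative, not from the article), you define plain domain classes and a context, and the framework derives the model from them:

        using System.Data.Entity; // EF Code First (EntityFramework package)

        // A plain domain class; no base class or mapping file is required.
        public class Order
        {
            public int Id { get; set; }
            public string Customer { get; set; }
        }

        // The context exposes the domain model as DbSet properties.
        public class StoreContext : DbContext
        {
            public DbSet<Order> Orders { get; set; }
        }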

    Read the article

  • What's Bringing SharePoint 2007 Server to a Halt?

    - by juanlarios
    I've been having issues with my test environment and I'm hoping someone has run into this problem and can point me in the right direction. I noticed: SharePoint Server memory usage is through the roof at times, and so is the CPU usage (most of the CPU usage is a SQL process). I'm running out of disk space all the time. I looked at the logs located in the 12 hive, and sure enough I have 1 GB log files that are hard to open because of their size. The following are the error messages that are flooding my SharePoint logs: 04/05/2010 16:02:36.99 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Variations Propagate Page Job Definition', id '{F9A73EB4-90FE-4574-AD99-B4034056F915}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs. 04/05/2010 15:59:51.51 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Profile Synchronization', id '{A05E3439-8DCD-449A-9D9E-46D601CACAA2}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs. 04/05/2010 15:56:25.53 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Scheduled Unpublish', id '{6298F93F-388D-46B9-809E-CEDBB8659661}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs. 04/05/2010 15:54:14.73 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Config Refresh', id '{C42DA970-3DA3-4AA2-94E5-8499C5B80A3E}' for service '{7F6D2CBE-8071-4A30-B313-7C9989FC2D87}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs. I've been googling around but haven't found much. I know one other person posted something about this back in 2008, but no answers were reached. I have already checked the databases to see if any of them have gone offline for whatever reason, but from SQL everything is fine. I recently created a new SSP and deleted an old one, so I thought maybe that was causing it; who knows, maybe that causes some or all of the problems. I'm running the configuration wizard to see if anything changes. Please, if someone has had similar issues, let me know.
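    Not from the post, but a commonly suggested first step when WSS 3.0 timer jobs pile up like this is to clear the SharePoint configuration cache and restart the timer service (the service name and path below are the WSS 3.0 defaults; verify them in your environment):

        :: stop the timer service
        net stop SPTimerV3
        :: under %ALLUSERSPROFILE%\Application Data\Microsoft\SharePoint\Config\<GUID>\
        :: delete the cached *.xml files (keep cache.ini), then edit cache.ini
        :: so that it contains only the value: 1
        net start SPTimerV3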

    Read the article

  • Are closures with side-effects considered "functional style"?

    - by Giorgio
    Many modern programming languages support some concept of closure, i.e. of a piece of code (a block or a function) that: Can be treated as a value, and therefore stored in a variable, passed around to different parts of the code, defined in one part of a program and invoked in a totally different part of the same program. Can capture variables from the context in which it is defined, and access them when it is later invoked (possibly in a totally different context). Here is an example of a closure written in Scala: def filterList(xs: List[Int], lowerBound: Int): List[Int] = xs.filter(x => x >= lowerBound) The function literal x => x >= lowerBound contains the free variable lowerBound, which is closed (bound) by the argument of the function filterList that has the same name. The closure is passed to the library method filter, which can invoke it repeatedly as a normal function. I have been reading a lot of questions and answers on this site and, as far as I understand, the term closure is often automatically associated with functional programming and functional programming style. The definition of functional programming on Wikipedia reads: In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state. and further on: [...] in functional code, the output value of a function depends only on the arguments that are input to the function [...]. Eliminating side effects can make it much easier to understand and predict the behavior of a program, which is one of the key motivations for the development of functional programming. On the other hand, many closure constructs provided by programming languages allow a closure to capture non-local variables and change them when the closure is invoked, thus producing a side effect on the environment in which they were defined. In this case, closures implement the first idea of functional programming (functions are first-class entities that can be moved around like other values) but neglect the second idea (avoiding side effects). Is this use of closures with side effects considered functional style, or are closures considered a more general construct that can be used both for a functional and a non-functional programming style? Is there any literature on this topic? IMPORTANT NOTE: I am not questioning the usefulness of side effects or of having closures with side effects. Also, I am not interested in a discussion about the advantages / disadvantages of closures with or without side effects. I am only interested in knowing whether using such closures is still considered functional style by the proponents of functional programming or if, on the contrary, their use is discouraged when using a functional style.
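    For concreteness, here is a minimal Scala sketch (the names are invented for illustration) of the second kind of closure described above, one that mutates a variable captured from its defining environment:

        var callCount = 0  // non-local, mutable state

        // The closure captures callCount and mutates it on every
        // invocation: a side effect on its defining environment.
        val countingPredicate = (x: Int) => {
          callCount += 1
          x >= 0
        }

        List(-1, 2, 3).filter(countingPredicate)  // List(2, 3)
        println(callCount)                        // 3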

    Read the article

  • BizTalk: Using context for routing

    - by Leonid Ganeline
    [See Sample: Context routing and throttling with orchestration] Imagine a project where most of the routing happens between orchestrations, i.e. routing is mostly between the MessageBox and orchestrations with direct-bound endpoints. Imagine also that most of the messages share the same message type. Usually in this case the messages get a special node used only for routing; for example, the field can be "Originator" or "Recipient" or "From" or "To". What is wrong with this approach is that it creates a dependency between the message and the message processing: the message "knows" something about the Originator or Recipient. So what can we do about it? How can we "colorize" the same message to route it to different places without changing the message itself? One solution is to use the message context. BizTalk uses promoted properties for routing. There are two kinds of properties: content properties and context properties. A content property extracts its value from inside the message; it is the value of an element or attribute. [See MSDN] A context property gets its value from the message environment: it can be the name of the port that received the message, or the message Id created by BizTalk. Context properties look like the headers in a SOAP message; actually they are not headers, but they behave like headers. Context properties are a good match for our case. First, we don't have to change the message itself to set or change the routing property, because the context is stored outside the message body. Second, we don't have to create a property schema to use context properties. [See MSDN: How to create Property schema] BizTalk has a predefined schema set for context properties. [See MSDN: Message Context Properties] Use one of them and that's it. The main purpose of the context properties is to work on behalf of the BizTalk internals, but we can read, create, and change them. Just take care not to interfere with the BizTalk internals along the way.
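    As an illustration (the property choice and values here are mine, not from the article), a sending orchestration can stamp a predefined context property in a Message Assignment shape, and a subscribing orchestration can filter on it in its activating Receive shape:

        // Message Assignment shape in the sending orchestration:
        // copy the message, then "colorize" its context for routing.
        OutMsg = InMsg;
        OutMsg(BTS.Operation) = "Billing";

        // Filter expression on the activating Receive shape of the
        // subscribing orchestration (set in the shape's properties):
        //     BTS.Operation == "Billing"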

    Read the article

  • WebCenter Content Web Search Performance: Do you really need that folder path info?

    - by Nicolas Montoya
    End-users want content at their fingertips, at the speed of thought if possible. When running search operations in the WebCenter Content web interface, every second or fraction of a second of improvement matters. When doing some trace analysis on the systemdatabase tracing in a customer environment, we came across some SQL queries that were unnecessarily being triggered! These were related to determining the folder path for every entry in the search result set. However, this folder path was not even being used as part of the displayed information in the user interface. Why was the folder path information being collected when it was not even displayed in the UI? We found that the configuration parameter 'FolderPathInSearchResults' was set to 'true' under Administration > Admin Server > General Configuration > Additional Configuration Variables, as shown below. When executing a quicksearch by keyword, we were getting 100 out of 2280 entries in the first page of the result set. When the 'FolderPathInSearchResults' configuration parameter is set to 'true', the following queries appear in the systemdatabase tracing: 100 executions of a query on the FolderFiles table, one for each of the documents displayed in the first page: >systemdatabase/6 12.13 11:17:48.188 IdcServer-199 1.45 ms. SELECT * FROM FolderFiles WHERE dDocName='SLC02VGVUSORAC140641' AND fLinkRank=0 [Executed. Returned row(s): true] 382 executions of a query on the folders tables - most of the documents that match the keyword criteria are at a folder depth level of three or four: >systemdatabase/6 12.13 11:17:48.114 IdcServer-199 2.57 ms. SELECT FolderFolders.*,FolderMetaDefaults.* FROM FolderFolders,FolderMetaDefaults WHERE FolderFolders.fFolderGUID=FolderMetaDefaults.fFolderGUID(+) AND ((FolderFolders.fFolderGUID = '1EB8E527E19B09ED3FE82EE310AEA13A')) [Executed. Returned row(s): true] By setting the 'FolderPathInSearchResults' configuration parameter to 'false', the above queries were no longer reported in the Server Output System Audit Information. Now, let's consider a practical scenario: search result set page size = 100; average folder depth per document in the search result set = 5. The number of folder-path-related queries will be 100 + 5*100 = 600. If each query takes slightly over 3 ms, you would have about 2000 ms (2 seconds) of server time spent gathering this information. The overall performance impact goes beyond server execution time, as this information also needs to travel from the server to the browser. If the documents are nested further into the folder hierarchy, additional hundreds of queries may be executed. If the folder path is not being displayed in the end-user interface profile, your system may be better off with the 'FolderPathInSearchResults' configuration parameter disabled.
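    As a sketch (the file location below is the usual Content Server default; adjust it for your install), the same parameter can also be set directly in config.cfg, followed by a server restart:

        # <domain>/ucm/cs/config/config.cfg
        FolderPathInSearchResults=false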

    Read the article

  • 5 Step Procedure for Android Deployment with NetBeans IDE

    - by Geertjan
    I'm finding that it's so simple to deploy apps to Android that I don't need to use the Android emulator at all; I haven't been able to figure out how it works anyway (a big blinky screen pops up that I don't know what to do with). I simply deploy the app straight to Android, try it out there, and then uninstall it if needed. The whole process takes a few seconds (only steps 4 and 5 below need to be done for each deployment iteration, after you've done steps 1, 2, and 3 once to set up the deployment environment). Here's what I do: On Android, go to Settings | Applications. Check "Unknown sources". In "Development", check "USB debugging". Connect Android to your computer via a USB cable. Start up NetBeans IDE, with NBAndroid installed, as described yesterday, and create your "Hello World" app. Right-click the project in the IDE and choose "Export Signed Android Package". Create a new keystore, or choose an existing one, via the wizard that appears. At the end of the wizard (it would be nice if NBAndroid would let you set up a keystore once and then reuse it for all your projects, without needing to work through the whole wizard step by step each time), you'll have a new release APK file (Android deployment archive) in the project's 'bin' folder, which you can see in the Files window. Go to the command line (it would be nice if NBAndroid were to support adb, which would mean I wouldn't need the command line at all) and browse to the location of the APK file above. Type "adb install helloworld-release.apk" or whatever the APK file is called. You should see a "Success" message in the command line. Now the application is installed. On your Android, go to "Applications", and there you'll see your brand new app. Try it out there and delete it if you're not happy with it. After you've made a change in your app, simply repeat steps 4 and 5, i.e., create a new APK and install it via adb. Steps 4 and 5 take a couple of seconds. And, given that it's all so simple, I don't see the value of the Android emulator at all.
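    For reference, an iteration with adb looks roughly like this (the file and package names are examples; yours will differ):

        # install the signed build; -r reinstalls in place, keeping app data
        adb install -r helloworld-release.apk

        # remove it again; uninstall takes the package name, not the APK file
        adb uninstall com.example.helloworld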

    Read the article

  • How to run software that is not offered through package managers and requires ia32-libs

    - by Onno
    I'm trying to install the Arma 2 OA dedicated server on a VirtualBox VM so I can test my own missions in a sandbox environment in a way that lets me offload them to another computer in my network. (The other computer is running the VM, but it's a Windows machine, and I didn't want to hassle with its installation.) It needs at least 2, and preferably 4 GB of RAM, so I thought I would install the AMD64 version of Ubuntu 13.10 to get this going. 'How do you run a 32-bit program on a 64-bit version of Ubuntu?' already explained how to install 32-bit software through apt-get and/or dpkg, but that doesn't apply in this case. The server is offered as a compressed download on the site of BI Studio, the developer of the Arma games. Its installation instructions are obviously slightly out of date with the current state of the art (probably because the state of the art has been updated quite recently :) ). It states that I have to install ia32-libs, which has now apparently been deprecated. Now I have to find out how to get the right packages installed to make sure that it will run. My experience level is novice-intermediate when it comes to these issues. I've installed a lot of packages through apt-get; I've solved dependency issues in the past; I haven't installed much software without using package managers. I can handle myself with basic administrative work like editing conf files and such. I have just gone ahead and tried to install it without installing ia32-libs through apt-get, but installed gcc to get the libs after all. My reasoning was that gcc will include the files for backward-compatibility coding, and on Linux all libs are (as far as I can tell) installed at a system level in /lib. So far it seems to start up. (I can connect with the game server through my in-game network browser, so it's communicating.) I'm not sure if there's any dependency checking going on when running the game server program, so I'm left with a couple of questions: Does 13.10 catch any calls to ia32-libs libraries and translate the calls to the right code on amd64? If it runs, does that mean that all required libraries have been loaded correctly, or is there a chance of it crashing later on when a library that was needed is missing after all? Is it necessary to do a workaround such as installing gcc? How do I find out what libraries I might need to run this software? (Or any other piece of 32-bit software that isn't offered through a package manager.)
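    Since ia32-libs was dropped in favor of multiarch, the usual replacement on 13.10 is to enable the i386 architecture and install the specific 32-bit libraries the binary links against (the package list below is a guess at typical game-server dependencies, not taken from the BI instructions):

        sudo dpkg --add-architecture i386
        sudo apt-get update
        sudo apt-get install libc6:i386 libstdc++6:i386 zlib1g:i386

        # list the 32-bit libraries the server binary actually needs
        # (assuming the extracted binary is named 'server'); anything
        # reported as "not found" still has to be installed as :i386
        ldd ./server | grep "not found"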

    Read the article

  • What arguments can I use to "sell" the BDD concept to a team reluctant to adopt it?

    - by S.Robins
    I am a bit of a vocal proponent of the BDD methodology. I've been applying BDD for a couple of years now, and have adopted StoryQ as my framework of choice when developing DotNet applications. Even though I have been unit testing for many years, and had previously shifted to a test-first approach, I've found that I get much more value out of using a BDD framework, because my tests capture the intent of the requirements in relatively clear English within my code, and because my tests can execute multiple assertions without ending the test halfway through - meaning I can see which specific assertions pass or fail at a glance without debugging to prove it. This has really been the tip of the iceberg for me, as I've also noticed that I am able to debug both test and implementation code in a more targeted manner, with the result that my productivity has grown significantly, and that I can more easily determine where a failure occurs if a problem happens to make it all the way to the integration build, thanks to the output that makes its way into the build logs. Further, the StoryQ API has a lovely fluent syntax that is easy to learn and can be applied in an extraordinary number of ways, requiring no external dependencies in order to use it. So with all of these benefits, you would think it would be easy to introduce the concept to the rest of the team. Unfortunately, the other team members are reluctant to even look at StoryQ to evaluate it properly (let alone entertain the idea of applying BDD), and have convinced each other to try and remove a number of StoryQ elements from our own core testing framework, even though they originally supported the use of StoryQ and it doesn't impact any other part of our testing system. Doing so would end up increasing my workload significantly overall and really goes against the grain, as I am convinced through practical experience that it is a better way to work in a test-first manner in our particular working environment, and can only lead to greater improvements in the quality of our software, given I've found it easier to stick with test-first using BDD. So the question really comes down to the following: What arguments can I use to really drive the point home that it would be better to use StoryQ, or at the very least apply the BDD methodology? Can you point me to any anecdotal evidence that I can use to support my argument to adopt BDD as our standard method of choice? What counterarguments can you think of that could suggest that my wish to convert the team efforts to BDD might be in error? Yes, I'm happy to be proven wrong provided the argument is a sound one. NOTE: I am not advocating that we rewrite our tests in their entirety, but rather that we simply start working in a different manner for all future testing work.
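    For readers who haven't seen it, StoryQ's fluent API reads roughly like this (a sketch; the story, scenario, and step-method names are invented for illustration, and the step bodies are elided):

        using System.Reflection;
        using StoryQ;

        // Step methods; StoryQ derives readable step text from the
        // method names and the arguments passed to them.
        void MyBalanceIs(int amount) { /* arrange */ }
        void IWithdraw(int amount) { /* act */ }
        void MyBalanceShouldBe(int amount) { /* assert */ }

        [Test]
        public void WithdrawingCash()
        {
            new Story("withdrawing cash")
                .InOrderTo("get cash when the branch is closed")
                .AsA("customer")
                .IWant("to withdraw money from an ATM")
                .WithScenario("sufficient balance")
                    .Given(MyBalanceIs, 100)
                    .When(IWithdraw, 20)
                    .Then(MyBalanceShouldBe, 80)
                .ExecuteWithReport(MethodBase.GetCurrentMethod());
        }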

    Read the article

  • Understanding the levels of computing

    - by RParadox
    Sorry for my confused question; I'm looking for some pointers. Up to now I have been working mostly with Java and Python on the application layer, and I have only a vague understanding of operating systems and hardware. I want to understand much more about the lower levels of computing, but it gets really overwhelming somehow. At university I took a class about microprogramming, i.e. how processors get hard-wired to implement the ASM codes. Up to now I always thought I wouldn't get more done if I learned more about the "low level". One question I have is: how is it even possible that hardware gets hidden almost completely from the developer? Is it accurate to say that the operating system is a software layer for the hardware? One small example: in programming I have never come across the need to understand what L2 or L3 cache is. For the typical business application environment one almost never needs to understand assembler and the lower levels of computing, because nowadays there is a technology stack for almost anything. I guess the whole point of these lower levels is to provide an interface to higher levels. On the other hand, I wonder how much influence the lower levels can have, for example this whole graphics computing thing. So, on the other hand, there is this theoretical computer science branch, which works on abstract computing models. However, I have also rarely encountered situations where I found it helpful to think in the categories of complexity models, proof verification, etc. I sort of know that there is a complexity class called NP, and that such problems are kind of impossible to solve for a big number of N. What I'm missing is a reference for a framework to think about these things. It seems to me that there are all kinds of different camps that rarely interact. The last few weeks I have been reading about security issues. Here, somehow, many of the different layers come together. Attacks and exploits almost always occur on the lower levels, so in this case it is necessary to learn about the details of the OSI layers, the inner workings of an OS, etc.

    Read the article

  • The Minimalist's Approach to Content Governance

    - by Kellsey Ruppel
    This week on the blog, we want to focus on the content lifecycle and how important it is to have the tools in place to properly manage all the phases of the content lifecycle. John Brunswick has some great advice when it comes to this topic, so expect to hear a lot from him this week! Originally posted by John Brunswick. Let's be honest - content governance is far from an exciting topic. BUT the potential of a very small intranet team creating and maintaining a platform that provides an organization with relevant, high-value information, helping workers get their jobs done with greater accuracy and in less time, is exciting. It is easy to quickly start producing content, but the challenge is ensuring that the environment is easy to navigate and use in the third week and during the third year. What can be done to bridge this gap? Over the next few blog entries, let's take a pragmatic, minimalistic view of a process that can help any team manage a wealth of unstructured information. Based on an earlier article that I wrote around Portal Governance, I am going to focus on using technology as much as possible to support the governance of content with minimal involvement from users. The only certainty about content production is that business users are not fans of maintaining content. Maintenance is overhead and a long-term investment whose value will possibly not be realized under the current content creator's watch. To add context to how we will use technical tools in this process, each post will highlight one section of the content lifecycle process as outlined below. Content Lifecycle Stages: 1. Request - understand the education, purpose, resource and success criteria for content; 2. Create - determine access and workflow for content; 3. Manage - understand ownership and review cycles; 4. Retire - act on thresholds established during the request stage. Within each stage we will also elaborate as to: 1. Why - why would we entertain doing this? 2. How - the steps that are needed to make it happen; 3. Impact - what is the net benefit or loss based on the process. Over the course of this week, we will dive deep into the stages and the minimal amount of time, effort and process within each to make some meaningful gains in the improvement of user experience and productivity in their search for information. It might be a stretch to say that we can make content governance exciting, but hopefully it can end up being painless and paying dividends. And if you'd like to hear first hand from a customer that is managing their content lifecycle with Oracle WebCenter, be sure to join us on Wednesday for this webcast "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter"!

    Read the article

  • Keeping your options open in a cloud solution

    - by BuckWoody
    In on-premises solutions we have the full range of options open for a given computing solution – but we don't always take advantage of them, for multiple reasons. Data goes in a Relational Database Management System, files go on a share, and e-mail goes to the Exchange server. Over time, vendors (including ourselves) add functionality to one product that allows non-standard use of the platform. For example, SQL Server (and Oracle, and others) allow large binary storage in or through the system – something not originally intended for an RDBMS to handle. There are certainly times when this makes sense, of course, but often these platform hammers turn every problem into a nail. It can make us "lazy" in our design – we sometimes don't take the time to learn another architecture because the one we've spent so much time with can handle what we want to do. But there's a distinct danger here. In nature, when a population shares too many of the same traits, it can cause a complete collapse if a situation exploits a weakness shared by that population. The same is true with not using the right tool for the job in a computing environment. Your company or organization depends on your knowledge as a professional to select the best mix of supportable, flexible, cost-effective technologies to solve their problems, whether you're in an architect role or not. So take some time today to learn something new. The way I do this is to select a given problem, and try to solve it with a technology I'm not familiar with. For instance – create a Purchase Order system in Excel, then in Hadoop or MongoDB, or even in flat files using PowerShell as an interface. No, I'm not suggesting any of these architectures is the proper way to solve the PO problem, but taking something concrete that you know well and applying that meta-knowledge to another platform will assist you in exercising the "little grey cells" and help you and your organization understand what is open to you. And of course you can do all of this on-premises – but my recommendation is to check out a cloud platform (my suggestion would of course be Windows Azure :) ) and try it there. Most providers (including Microsoft) provide free time to do that.

    Read the article

  • The Connected Company: WebCenter Portal Activity Streams

    - by Michael Snow
    Guest post by Mitchell Palski, Oracle Staff Sales Consultant. Social media is sure to have made its way into your company or government organization. Whether it's discussion threads, blog posts, Facebook-style profile pages, or just a simple instant messenger application, in one way or another your employees are connected. What are the objectives of leveraging social media in your organization? Facilitating knowledge transfer. More effectively organizing team events. Generating inter-community discussions to solve problems. Improving resource management. Increasing organizational awareness. Creating an environment of accountability. Do any of the business objectives above stand out to you as needs? If so, consider leveraging the WebCenter Portal Activity Stream as part of your solution. In WebCenter Portal, the Activity Stream feature provides a streaming view of the activities of your connections, actions taken in portals, and business activities that looks a lot like a combined Facebook and Twitter newsfeed. Activity Stream can note when a user: Posts feedback (comments). Uploads a document. Creates a new blog, page, event, or announcement. Starts a new discussion. Streams messages and attachments entered through WebCenter Publisher (similar to Twitter). Through Activity Stream Preferences, you can select which of these activities to show or hide from your personal Activity Stream. Here's what you get: A real-time stream of activities within a portal or sub-portal increases awareness across your organization or within a working group. A complete list of user actions reduces the time-to-find for users that need to interact with the latest activities in your portal. Users can publish to their groups when tasks are finished, for complete group traceability and accountability, as well as improved resource management. Project discussions and shared documents that require the expertise of someone outside of a working group now get increased visibility across your organization. There's a reason that commercial social media tools like Facebook and Twitter have been so successful - they spread information in an aesthetically appealing and easy-to-read format. Strategically placing an Activity Feed within your portal is analogous to sending your employees a daily newsletter, events calendar, recent documents report, and list of announcements - BUT ALL IN ONE!

    Read the article

  • Showing ZFS some LOVE

    - by Kristin Rose
    L is for the way you look at us, and O because we're Oracle, but V is very, very, extra ordinary, and E, well that's obvious... E is because Oracle's new Sun ZFS Storage Appliance is Excellent, and here at OPN, we like to spell out the obvious! If you haven't already heard, the Sun ZFS Appliance has "A simple, GUI-driven setup and configuration, solid price-performance and world-class Oracle support behind it. The CRN Test Center recommends the Sun ZFS Storage". Read more about what CRN said here. Oracle's Sun ZFS Appliance family delivers enterprise-class network attached storage (NAS) capabilities with leading Oracle integration, simplicity, efficiency, performance, and TCO. The systems offer an easy way to manage and expand your storage environment at a lower cost, with more efficiency, better data integrity, and higher performance when compared with competitive NAS offerings. Did we mention that setup, including configuration, will take you less than an hour, since it all comes in one box and is so darn simple to use? So if you L-O-V-E what you're hearing about Oracle's Sun Z-F-S, learn more by watching the video below and visiting any of our available resources. It Had to Be You, The OPN Communications Team

    Read the article

  • Adding JavaScript to your code dependent upon conditions

    - by DavidMadden
    You might be in an environment where your code is source-controlled and where you might have build options for different environments. I recently encountered this where the same code, built with different configurations, would have the website at a different URL. If you are working with ASP.NET as I am, you will have to do something a bit crazy but worthwhile. If someone has a more efficient solution, please share. Here is what I came up with to make sure the client-side script was placed into the HEAD element for the Google Analytics script. GA wants to be last in the HEAD element, so if you are adding others in Page_Load, you should add theirs last. The settings object below is an instance of a class that holds information I collect. You could read from different sources depending on where you stored your unique ID for Google Analytics. *** This has been formatted to fit this screen. ***

    if (!IsPostBack)
    {
        // guard against a missing or empty tracking ID
        if (!string.IsNullOrEmpty(settings.GoogleAnalyticsID))
        {
            string str = @"//<![CDATA[
    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', '" + settings.GoogleAnalyticsID + @"']);
    _gaq.push(['_trackPageview']);
    (function () {
      var ga = document.createElement('script');
      ga.type = 'text/javascript';
      ga.async = true;
      ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
      var s = document.getElementsByTagName('script')[0];
      s.parentNode.insertBefore(ga, s);
    })();
    //]]>";

            System.Web.UI.HtmlControls.HtmlGenericControl si =
                new System.Web.UI.HtmlControls.HtmlGenericControl();
            si.TagName = "script";
            si.Attributes.Add("type", @"text/javascript");
            si.InnerHtml = str;
            this.Page.Header.Controls.Add(si);
        }
    }

    The code above prevents the script from being added on a PostBack, or when the ID could not be read, or when something caused the settings to be lost by accident. If you have a larger function to declare, you can use a StringBuilder to assemble the lines. This is as compact as I wished to go while managing readability.

    Read the article

  • Obfuscation is not a panacea

    - by simonc
    So, you want to obfuscate your .NET application. My question to you is: why? What are your aims when you obfuscate your application? To protect your IP and algorithms? To prevent crackers from breaking your licensing? Your boss says you need to? To give you a warm fuzzy feeling inside? Obfuscating code correctly can be tricky: it can break your app if applied incorrectly, and it can cause problems down the line. Let me be clear - there are some very good reasons why you would want to obfuscate your .NET application. However, you shouldn't be obfuscating for the sake of obfuscating. Security through obfuscation? Once your application has been installed on a user's computer, you no longer control it. If they do not want to pay for your application, then nothing can stop them from cracking it, even if the time cost to them is much greater than the cost of actually paying for it. Some people will not pay for software, even if it takes them a month to crack a $30 app. And once it is cracked, there is nothing stopping them from putting the result up on the internet. There should be nothing surprising about this; there is no software protection available for general-purpose computers that cannot be cracked by a sufficiently determined attacker. Only by completely controlling the entire stack – software, hardware, and the internet connection – can you have even a chance of being uncrackable. And even then, someone somewhere will still have a go, and probably succeed. Even high-end cryptoprocessors have known vulnerabilities that can be exploited by someone with a scanning electron microscope and lots of free time. So, then, why use obfuscation? Well, the primary reason is to protect your IP. What obfuscation is very good at is hiding the overall structure of your program, so that it's very hard to figure out what exactly the code is doing at any one time, what context it is running in, and how it fits in with the rest of the application; all of which you need to do to understand how the application operates. This is completely different to cracking an application, where you simply have to find a single toggle that determines whether the application is licensed or not, and flip it without the rest of the application noticing. However, again, there are limitations. An obfuscated application still has to run in the same way, and do the same thing, as the original unobfuscated application. This means that some of the protections applied to the obfuscated assembly have to be undone at runtime, else it would not run on the CLR and do the same thing. And, again, since we don't control the environment the application is run on, there is nothing stopping a user from undoing those protections manually and reversing some of the obfuscation. It's a perpetual arms race, and it always will be. We have plenty of ideas lined up for new protections, and the new protections added in SA 6.6 (method parent obfuscation and a new control flow obfuscation level) are specifically designed to be harder to reverse and to reconstruct the original structure from. So then, by all means, obfuscate your application if you want to protect the algorithms and what the application does. That's what SmartAssembly is designed to do. But make sure you are clear about what a .NET obfuscator can and cannot protect you against, and don't expect your obfuscated application to be uncrackable. Someone, somewhere, will crack your application if they want to and they don't have anything better to do with their time.
The best we can do is dissuade the casual crackers and make it much more difficult for the serious ones. Cross posted from Simple Talk.

    Read the article

  • Developing a Support Plan for Cloud Applications

    - by BuckWoody
    Last week I blogged about developing a high-availability plan. The specifics of a given plan aren't as simple as "Step 1, then Step 2", because in a hybrid environment (which most of us have) the situation changes the requirements. There are those who look for simple "template" solutions, but unless you settle on a single vendor and a single way of doing things, that's not really viable. The same holds true for support. As I've mentioned before, I'm not fond of the term "cloud", and would rather use the term "Distributed Computing". That being said, more people understand the former, so I'll just use that for now. What I mean by Distributed Computing is leveraging another system or setup to perform all or some of a computing function. If this definition holds true, then you're essentially creating a partnership with a vendor to run some of your IT, whether that be IaaS, PaaS or SaaS, or more often, a mix. In your on-premises systems, you're the first and sometimes only line of support. That changes when you bring in a cloud vendor. For Windows Azure, we have plans for support that you can pay for if you like: http://www.windowsazure.com/en-us/support/plans/ You're not off the hook entirely, however. You still need to create a plan to support your users in their applications, especially for the parts you control. The last thing they want to hear is "That's vendor X's problem - you'll have to call them." I find that this is often the last thing the architects think about in a solution. It's fine to put off the support question prior to deployment, but I would hold off on calling it "production" until you have that plan in place. There are lots of examples, like this one: http://www.va-interactive.com/inbusiness/editorial/sales/ibt/customer.html some of which are technology-specific. Once again, this is an "it depends" kind of approach. While it would be nice if there were just something in a box we could buy, it just doesn't work that way in a hybrid system. You have to know your options and apply them appropriately.

    Read the article

  • What books would I recommend?

    - by user12277104
    One of my mentees (I have three right now) said he had some time on his hands this Summer and was looking for good UX books to read ... I sigh heavily, because there is no shortage of good UX books to read. My bookshelves have titles by well-read authors like Nielsen, Norman, Tufte, Dumas, Krug, Gladwell, Pink, Csikszentmihalyi, and Roam. I have titles by lesser-known authors, many of whom I call friends, and many others whom I'll likely never meet. I have books on Excel pivot tables, typography, mental models, culture, accessibility, surveys, checklists, prototyping, Agile, Java, sketching, project management, HTML, negotiation, statistics, user research methods, six sigma, usability guidelines, dashboards, the effects of aging on cognition, UI design, and learning styles, among others ... many others. So I feel the need to qualify any book recommendations with "it depends ...", because it depends on who I'm talking to, and what they are looking for. It's probably best that I also mention that the views expressed in this blog are mine, and may not necessarily reflect the views of Oracle. There. I'm glad I got that off my chest. For that mentee, who will be graduating with his MS HFID + MBA from Bentley in the Fall, I'll recommend this book: Universal Principles of Design. This is a great book, which in its first edition held "100 ways to enhance usability, influence perception, increase appeal, make better design decisions, and teach through design." Granted, the second edition expanded that number to 125, but when I first found this book, I felt like I'd discovered the Grail. Its research-based principles are all laid out in two pages each, with lots of pictures and good references. A must-have for the new grad. Do I have recommendations for a book that will teach you how to conduct a usability test? Yes, three of them. To communicate what we do to management? Yes. To create personas? Yep, two or three. Help you with UX in an Agile environment? You bet, I've got two I'd recommend. Create an excellent presentation? Uh huh. Get buy-in from your team? Of course. There are a plethora of excellent UX books out there. But which ones I recommend ... well ... it depends.

    Read the article

  • StringBuffer behavior in LWJGL

    - by Michael Oberlin
    Okay, I've been programming in Java for about ten years, but am entirely new to LWJGL. I have a specific problem whilst attempting to create a text console. I have built a class meant to abstract input polling to it, which (in theory) captures key presses from the Keyboard object and appends them to a StringBuilder/StringBuffer, then retrieves the completed string after receiving the ENTER key. The problem is, after I trigger the String return (currently with ESCAPE), and attempt to print it to System.out, I consistently get a blank line. I can get an appropriate string length, and I can even sample a single character out of it and get complete accuracy, but it never prints the actual string. I could swear that LWJGL slipped some kind of thread-safety trick in while I wasn't looking. Here's my code: static volatile StringBuffer command = new StringBuffer(); @Override public void chain(InputPoller poller) { this.chain = poller; } @Override public synchronized void poll() { //basic testing for modifier keys, to be used later on boolean shift = false, alt = false, control = false, superkey = false; if(Keyboard.isKeyDown(Keyboard.KEY_LSHIFT) || Keyboard.isKeyDown(Keyboard.KEY_RSHIFT)) shift = true; if(Keyboard.isKeyDown(Keyboard.KEY_LMENU) || Keyboard.isKeyDown(Keyboard.KEY_RMENU)) alt = true; if(Keyboard.isKeyDown(Keyboard.KEY_LCONTROL) || Keyboard.isKeyDown(Keyboard.KEY_RCONTROL)) control = true; if(Keyboard.isKeyDown(Keyboard.KEY_LMETA) || Keyboard.isKeyDown(Keyboard.KEY_RMETA)) superkey = true; while(Keyboard.next()) if(Keyboard.getEventKeyState()) { command.append(Keyboard.getEventCharacter()); } if (Framework.isConsoleEnabled() && Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) { System.out.println("Escape down"); System.out.println(command.length() + " characters polled"); //works System.out.println(command.toString().length()); //works System.out.println(command.toString().charAt(4)); //works System.out.println(command.toString().toCharArray()); //blank line! System.out.println(command.toString()); //blank line! Framework.disableConsole(); } //TODO: Add command construction and console management after that } } Maybe the answer's obvious and I'm just feeling tired, but I need to walk away from this for a while. If anyone sees the issue, please let me know. This machine is running the latest release of Java 7 on Ubuntu 12.04, Mate desktop environment. Many thanks.
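    One way to narrow this down (my suggestion; the variable name matches the question) is to dump the buffer's raw code points, which will reveal any non-printing characters that were appended alongside the visible ones:

        // control characters (ESC, CR, etc.) show up here as code points
        // even though they print as nothing on many consoles
        for (int i = 0; i < command.length(); i++) {
            System.out.printf("%d: U+%04X%n", i, (int) command.charAt(i));
        }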

    Read the article

  • JavaOne 2013: (Key) Notes of a conference – State of the Java platform and all the roadmaps by Amis

    - by JuergenKress
    Last week's JavaOne conference provided insights into the roadmap of the Java platform as well as into the current state of things in the Java community: the close relationship between Oracle and IBM concerning Java, the (continuing) lack of such a relationship with Google, the support from Microsoft for Java applications on its Azure cloud, and the vibrant developer community, with over 200 different Java User Groups in many countries of the world. There were no major surprises or stunning announcements. Java EE 7 (released in June) was celebrated, and the progress of Java 8 SE was explained, as well as the progress on Java Embedded and ME. The availability of NetBeans 7.4 RC1 and the JDK 8 Early Adopters release, as well as the open-sourcing of project Avatar, were probably the only real news stories. The convergence of JavaFX and Java SE is almost complete; the upcoming alignment of Java SE Embedded and Java ME is the next big consolidation step, one that will lead to a unified platform where developers can use the same skills, development tools and APIs for EE, SE, SE Embedded and ME development. This means that anything that runs on ME will run on SE (Embedded) and EE – not necessarily the reverse, because not all SE APIs are part of the compact profile or the ME environment. However, the trimming down of the SE libraries and the increased capabilities of devices mean that a pretty rich JVM runs on many devices – such as JavaFX 8 on the Raspberry Pi. The major theme of the conference was the Internet of Things: a world of things that are smart and connected, devices like sensors, cameras and equipment, from cars, fridges and television sets to printers, security gates and kiosks, that all run Java and are all capable of sending data over local network connections or directly over the internet. The number of devices that have these capabilities is rapidly growing. This means that the number of places where Java programs can help program the behavior of devices is growing too. It also means that the volume of data generated is expanding, and that we have to find ways to harvest that data, possibly do local pre-processing (filter, aggregate) and channel the data to back-end systems. Terms typically used are edge devices (small, simple, publishing data), gateways (receiving data from many devices; collecting, consolidating and pre-processing it; sending it onwards to the back end, typically using real-time event processing) and enterprise services, which receive the data-turned-information from the gateways to further consolidate, distribute and act upon it. A cheap device like the Raspberry Pi is a perfect way for a Java developer to get started with what embedded (device) programming means and how interaction with physical input and output takes place. Roadmaps: the overall progress on Java is visualized in this overview. Read the full article here. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Movement prediction for non-shooters

    - by ShadowChaser
    I'm working on an isometric (2D) game with moderate-scale multiplayer: 20-30 players. I've had some difficulty getting a good movement prediction implementation in place.

    Right now, clients are authoritative for their own position. The server performs validation and broad-scale cheat detection, and I fully realize that the system will never be fully robust against cheating. However, the performance and implementation tradeoffs work well for me right now. Given that I'm dealing with sprite graphics, the game has 8 defined directions rather than free movement. Whenever the player changes their direction or speed (walk, run, stop), a "true" 3D velocity is set on the entity and a packet is sent to the server with the new movement state. In addition, every 250ms additional packets are transmitted with the player's current position for state updates on the server as well as for client prediction. After the server validates a packet, it gets automatically distributed to all of the other "nearby" players. Client-side, all entities with non-zero velocity (i.e. moving entities) are tracked and updated by a rudimentary "physics" system, basically nothing more than changing the position by the velocity according to the elapsed time slice (40ms or so).

    What I'm struggling with is how to implement clean movement prediction. I have the nagging suspicion that I've made a design mistake somewhere. I've been over the Unreal, Half-Life, and all the other movement prediction/lag compensation articles I could find, but they all seem geared toward shooters: "Don't send each control change, send updates every 120ms, server is authoritative, client predicts, etc." Unfortunately, that style of design won't work well for me; there's no 3D environment, so each individual state change is important.

    1) Most of the samples I saw tightly couple movement prediction right into the entities themselves, for example by storing the previous state along with the current state. I'd like to avoid that and keep entities with their "current state" only. Is there a better way to handle this?

    2) What should happen when the player stops? I can't interpolate to the correct position, since they might need to walk backwards or in another strange direction if their position is too far ahead.

    3) What should happen when entities collide? If the current player collides with something, the answer is simple: just stop the player from moving. But what happens if two entities take up the same space on the server? What if the local prediction causes a remote entity to collide with the player or another entity? Do I stop them as well? If the prediction had the misfortune of sticking them in front of a wall that the player has gone around, the prediction will never be able to compensate, and once the error gets too high the entity will snap to the new position.
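
    As a rough sketch of the per-tick step described above (each moving entity advanced by its velocity over the elapsed slice), together with one common way to fold in an authoritative server update without snapping, the following is purely illustrative; the class and field names are invented and are not taken from the poster's code.

        // Hedged sketch: the entity is advanced by its last known velocity each tick,
        // and a fresh server state is blended in gradually rather than applied at once.
        public class PredictedEntity {
            double x, y;     // current (predicted) position
            double vx, vy;   // last known velocity, in world units per second

            /** Advance the prediction by dt seconds (e.g. 0.040 for a 40 ms slice). */
            void integrate(double dt) {
                x += vx * dt;
                y += vy * dt;
            }

            /** Blend toward an authoritative server update instead of snapping to it. */
            void applyServerState(double sx, double sy, double svx, double svy) {
                final double correction = 0.1;  // fraction of the positional error removed per update
                x += (sx - x) * correction;
                y += (sy - y) * correction;
                vx = svx;
                vy = svy;
            }
        }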

    Read the article

  • Ubuntu 12.04.1 failing in a VM (both VBox and VMware) on a new HP Envy 4t-1000

    - by Chas
    Brand new to Linux, and getting frustrated trying to get an environment up with Ubuntu. My primary goal is to learn Linux and Apache/PHP development. I need to keep my Windows OS as the main one on my machine for work, so I'm trying to virtualize Ubuntu 12.04.1, without luck so far (many attempts). I have a new HP Envy 4t-1000 with 16 GB of RAM and a 32 GB SSD caching a 500 GB spindle hard drive. The graphics hardware is an Intel HD 3000 with an AMD Radeon 7670M.

    When installing Ubuntu Desktop in VBox, I get this result: https://forums.virtualbox.org/viewtopic.php?f=6&t=51939

    With VMware Workstation 7 (patched), I complete the install of Ubuntu, it reboots, the purple desktop briefly flashes, then it drops to a command line. I bought a beginning Ubuntu book, and it recommends trying to manually configure graphics if this happens. So I tried doing a safe boot holding Shift. I get to the first screen (GRUB), which loads fine, and I choose recovery mode. After choosing recovery mode, I get the recovery mode options and can arrow down to what the book suggests, 'Run in failsafe graphic mode.' Once I select this option, I get a black screen with a large white dialogue box; at the top it says "The system is running in low-graphics mode. Your screen, graphics card, and input device settings could not be detected correctly. You will need to configure these yourself." Then there is an OK button way down at the bottom. When I select OK I get a menu of options, and the book recommended 'reconfigure graphics.' When I try this, I get a menu of two options: 1) use generic (default) configuration, or 2) use backup. I've tried both options several times; hitting OK just refreshes the screen and nothing more. Rebooting at this point just goes back to the command line as before.

    I don't know what to do at this point. I've spent too many hours this weekend trying, in both VBox and VMware, to get Ubuntu going. Isn't there some very basic graphics mode or something I can use to at least get into the desktop? I explored GRUB some more and tried to look at the startup and X server logs; both are blank. No help there, I guess? When I try to choose 'Edit the configuration file' and then 'ok', the screen just refreshes on the same menu options and nothing happens.

    Thanks for any advice. I really need to focus on learning Linux, Apache and PHP, so perhaps Ubuntu just won't work on my hardware? Any other suggestions? I will need to virtualize. Thanks for any help/advice.

    Read the article

  • Developing a Support Plan for Cloud Applications

    - by BuckWoody
    Last week I blogged about developing a high-availability plan. The specifics of a given plan aren't as simple as "Step 1, then Step 2", because in a hybrid environment (which most of us have) the situation changes the requirements. There are those who look for simple "template" solutions, but unless you settle on a single vendor and a single way of doing things, that's not really viable. The same holds true for support.

    As I've mentioned before, I'm not fond of the term "cloud", and would rather use the term "Distributed Computing". That being said, more people understand the former, so I'll just use that for now. What I mean by Distributed Computing is leveraging another system or setup to perform all or some of a computing function. If this definition holds true, then you're essentially creating a partnership with a vendor to run some of your IT, whether that be IaaS, PaaS or SaaS, or more often, a mix.

    In your on-premises systems, you're the first and sometimes only line of support. That changes when you bring in a Cloud vendor. For Windows Azure, we have support plans that you can pay for if you like: http://www.windowsazure.com/en-us/support/plans/ You're not off the hook entirely, however. You still need to create a plan to support your users in their applications, especially for the parts you control. The last thing they want to hear is "That's vendor X's problem; you'll have to call them." I find that this is often the last thing the architects think about in a solution. It's fine to put off the support question prior to deployment, but I would hold off on calling it "production" until you have that plan in place.

    There are lots of examples, like this one: http://www.va-interactive.com/inbusiness/editorial/sales/ibt/customer.html some of which are technology-specific. Once again, this is an "it depends" kind of approach. While it would be nice if there were just something in a box we could buy, it just doesn't work that way in a hybrid system. You have to know your options and apply them appropriately.

    Read the article

  • Java @Contended annotation to help reduce false sharing

    - by Dave
    See this posting by Aleksey Shipilev for details – @Contended is something we've wanted for a long time. The JVM provides automatic layout and placement of fields. Usually it'll (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot, frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap, and write-sharing is very expensive. We can also pack together fields that tend to be written together by the same thread at about the same time.

    More generally, we're trying to influence relative field placement to minimize coherency misses. Fields that are accessed closely together in time should be placed proximally in space to promote cache locality; that is, temporal locality should condition spatial locality. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic. As such, we try to cluster or otherwise sequester fields that tend to be written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses, we can end up with excessive coherency misses when running in a parallel environment. There's no single layout that is optimal for both single-threaded and multithreaded environments, and the ideal layout problem itself is NP-hard.

    Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult, as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the Unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection. Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.
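
    As a hedged illustration of the facility being described (JDK 8 syntax, where the annotation lives at sun.misc.Contended and user code needs -XX:-RestrictContended for it to be honored; the class and field names below are invented for this sketch):

        import sun.misc.Contended;

        // Hedged sketch: isolate hot, frequently written fields from colder ones so
        // that writers on different cores do not invalidate each other's cache lines.
        public class RequestStats {
            @Contended("hot")                // isolated from the rest of the object's fields
            volatile long requestCount;      // written constantly by worker threads

            @Contended("hot")
            volatile long lastRequestNanos;  // written together with requestCount

            long configuredLimit;            // mostly read-only; stays in the default layout
            String serviceName;
        }

    Fields that share a group tag ("hot" above) are packed together but padded away from everything else, which matches the clustering of fields written together by the same thread that the post describes.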

    Read the article

< Previous Page | 446 447 448 449 450 451 452 453 454 455 456 457  | Next Page >