Search Results

Search found 20869 results on 835 pages for 'things i hate'.


  • Ubuntu 13.04 installation issues: unable to handle kernel paging request error

    - by user173944
    I wish I could say that I've done more for the Linux community recently, but I am very, VERY new to all of this and I feel very much in over my head. I figured I would install Ubuntu on my computer and then learn and contribute to the community simultaneously. I will try to be as detailed as I can; please ask questions if you need clarification.

    I installed Ubuntu 13.04 (64-bit) on my Dell Inspiron 1501. It has an AMD Turion 64 TL-56 1.8 GHz dual-core mobile processor and an ATI Radeon Xpress 1150 chipset. As of right now it only has 2GB of RAM, but I was planning on upgrading that in the near future, so I opted for the 64-bit Ubuntu 13.04.

    I first tried the live CD and everything seemed to be functioning correctly other than the wireless (but that's not the issue at hand; there are plenty of guides on the internet on how to get that working). The internet worked just fine when it was plugged in, so no issues there. However, once I went from that to installing 13.04 (just 13.04, no dual partitioning... I want this computer to run strictly Ubuntu), it did not work. It dropped me into a shell that I could not type anything into. In this shell it said "BUG: unable to handle kernel paging request", then it called a bunch of traces and froze up. I had to hard reset the laptop.

    I tried the boot-repair program multiple times with many different settings, and typically after starting up, the laptop would say something along the lines of "recursive errors, will attempt to fix". It would then attempt to fix a couple of things, and the computer would freeze up after the text said "end trace", so I had to hard reset it again. I'm not an impatient person either; when I say it would freeze up, it would be for a period of at least 15 minutes each time before I decided to hard reset. I attempted to install 12.10 on it instead and got the same exact message, and when I ran boot-repair it did the same exact thing as before.

    I am currently in the process of running memtest86+ on the computer's memory, though I really don't believe that it, nor any of the hardware, is at fault, given that the machine was still running Windows Vista perfectly when I decided to switch over to Ubuntu. So far the memtest has come back fine without any errors, but I've only been running it for approximately an hour.

    So this is the situation I'm in. I did notice when I was using the live disk that the video driver needed updating, so I did that, though I'm fairly certain that has nothing to do with this. I have also attempted (though I'm not certain that my attempt accomplished what I had planned) to manually edit the grub settings by adding acpi=0 on top of adding nomodeset to the boot commands. Like I said, I'm not sure I did that correctly, but I'm fairly certain I did. If anyone needs any more information I will be more than happy to provide it; I will post back once I get the full results of the memtest. I very much appreciate any ideas anyone else has. I've been at this for a few days to no avail... thank you
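    For reference, a minimal sketch of the kind of GRUB edit described above, using the stock Ubuntu file and tools (note that the standard spelling of the ACPI switch is acpi=off rather than acpi=0, and which parameters actually help varies by machine):

        # /etc/default/grub -- append parameters to the default kernel command line
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset acpi=off"

        # then regenerate the GRUB configuration and reboot
        sudo update-grub

    If the edit took effect, the same parameters should be visible at boot time by pressing 'e' on the GRUB menu entry.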

    Read the article

  • Using the latest (stable release) of Oracle Developer Tools for Visual Studio 11.1.0.7.20.

    - by mbcrump
    This guide is for someone wanting to use the latest ODP.NET quickly, without reading the official documentation, to get simple and safe data connections. It will get you up and running in about 15 minutes. I reviewed the referral traffic to my earlier article, Setting up ODP.NET with Win7 x64, and noticed most people were searching for one of the following terms:

    - "how to use odp.net with vs"
    - "setup connection odp.net"
    - "query db using odp and vs"

    While that article provided links and a sample tnsnames.ora file, it really didn't tell you how to use it. I'm hoping that this brief tutorial will help. So before we get started, you will need the following:

    - Download the following from Oracle and install it (it is the first one on the page): www.oracle.com/technology/software/tech/dotnet/utilsoft.html
    - Visual Studio 2008 or 2010.

    It should be noted that the System.Data.OracleClient namespace is the OLD .NET Framework Data Provider for Oracle. It should not be used anymore, as it has been deprecated. The latest version, which is what we are using, is Oracle.DataAccess.Client.

    First things first: add a reference to Oracle.DataAccess.Client after you install ODP.NET.

    Copy and paste the following C# code into your project and replace the relevant info, including the query string, and you should be able to return data. I have commented several lines of code to assist in understanding what it is doing.

        using System;
        using System.Data;
        using Oracle.DataAccess.Client;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    try
                    {
                        // Setup DataSource
                        string oradb = "Data Source=(DESCRIPTION ="
                                     + "(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521)))"
                                     + "(CONNECT_DATA = (SERVICE_NAME = SERVICENAME)));"
                                     + "Persist Security Info=True;User ID=USER;Password=PASSWORD;";

                        // Open connection to Oracle - this could be moved outside the try.
                        OracleConnection conn = new OracleConnection(oradb);
                        conn.Open();

                        // Create cmd and use parameters to prevent SQL injection attacks.
                        OracleCommand cmd = new OracleCommand();
                        cmd.Connection = conn;
                        cmd.CommandText = "select username from table where username = :username";

                        // Value we are searching for (this variable was undefined in the original listing).
                        string username = "USER";

                        OracleParameter p1 = new OracleParameter("username", OracleDbType.Varchar2);
                        p1.Value = username;
                        cmd.Parameters.Add(p1);

                        cmd.CommandType = CommandType.Text;

                        OracleDataReader dr = cmd.ExecuteReader();
                        dr.Read();

                        // Contains the value of the data row
                        Console.WriteLine(dr["username"].ToString());

                        // Disposes of objects.
                        dr.Dispose();
                        cmd.Dispose();
                        conn.Dispose();
                    }
                    catch (OracleException ex) // Catches only Oracle errors
                    {
                        switch (ex.Number)
                        {
                            case 1:
                                Console.WriteLine("Error attempting to insert duplicate data.");
                                break;
                            case 12545:
                                Console.WriteLine("The database is unavailable.");
                                break;
                            default:
                                Console.WriteLine(ex.Message.ToString());
                                break;
                        }
                    }
                    catch (Exception ex) // Catches any error not previously caught
                    {
                        Console.WriteLine("Unidentified Error: " + ex.Message.ToString());
                    }
                }
            }
        }

    At this point, you should have a working program that returns data from an Oracle database. If you are still having trouble, drop me a line and I will be happy to assist. As of this writing, Oracle has announced the latest beta release of ODP.NET, 11.2.0.1.1 Beta. This release includes .NET Framework 4 and .NET Framework 4 Client Profile support. You may want to hold off on this version for a while as it is beta, and I wouldn't want any production code using beta software.
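    As a small aside, here is a sketch of the same data-access portion using C# using blocks, so the reader, command, and connection are disposed even when an exception is thrown. It assumes the same placeholder oradb connection string and username value as the listing above:

        using (OracleConnection conn = new OracleConnection(oradb))
        using (OracleCommand cmd = new OracleCommand(
            "select username from table where username = :username", conn))
        {
            conn.Open();

            // Parameterized exactly as before, just built inline.
            cmd.Parameters.Add(new OracleParameter("username", OracleDbType.Varchar2) { Value = username });

            using (OracleDataReader dr = cmd.ExecuteReader())
            {
                if (dr.Read())
                    Console.WriteLine(dr["username"].ToString());
            }
        } // conn, cmd, and dr are all disposed here, even on error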

    Read the article

  • Prefilling an SMS on Mobile Devices with the sms: Uri Scheme

    - by Rick Strahl
    Popping up the native SMS app from a mobile HTML Web page is a nice feature that allows you to pre-fill info into a text for sending by a user of your mobile site. The syntax is a bit tricky due to some device inconsistencies (and quite a bit of wrong/incomplete info on the Web), but it's definitely something that's fairly easy to do.

    In one of my mobile HTML Web apps I need to share a current location via SMS. While browsing around a page I want to select a geo location, then share it by texting it to somebody. Basically I want to pre-fill an SMS message with some text, but no name or number, which instead will be filled in by the user.

    What works
    The syntax that seems to work fairly consistently, except for iOS, is this:

        sms:phonenumber?body=message

    For iOS, instead of the ? use a ';' (because Apple is always right, standards be damned, right?):

        sms:phonenumber;body=message

    and that works to pop up a new SMS message on the mobile device. I've only marginally tested this with a few devices: an iPhone running iOS 6, an iPad running iOS 7, Windows Phone 8 and a Nexus S in the Android emulator. All four devices managed to pop up the SMS with the data prefilled.

    You can use this in a link:

        <a href="sms:1-111-1111;body=I made it!">Send location via SMS</a>

    or you can set it on window.location in JavaScript:

        window.location = "sms:1-111-1111;body=" + encodeURIComponent("I made it!");

    to make the window pop up directly from code. Notice that the content should be URL encoded - HTML links encode automatically, but when you assign the URL directly in code the text value should be encoded.

    Body only
    I suspect in most applications you won't know who to text, so you only want to fill the text body, not the number. That works as you'd expect by just leaving out the number - here's what the URLs look like in that case:

        sms:?body=message

    For iOS, same thing except with the ;

        sms:;body=message

    Here's an example of the code I use to set up the SMS:

        var ua = navigator.userAgent.toLowerCase();
        var url;
        if (ua.indexOf("iphone") > -1 || ua.indexOf("ipad") > -1)
            url = "sms:;body=" + encodeURIComponent("I'm at " + mapUrl + " @ " + pos.Address);
        else
            url = "sms:?body=" + encodeURIComponent("I'm at " + mapUrl + " @ " + pos.Address);
        location.href = url;

    and that also works for all the devices mentioned above.

    It's pretty cool that URL schemes exist to access device functionality, and the SMS one will come in pretty handy for a number of things. Now if only all of the URI schemes were a bit more consistent (damn you Apple!) across devices...

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in iOS, JavaScript, HTML5.

    Read the article

  • Stuck with Documentum Still? Do MORE with Oracle WebCenter!

    - by Michael Snow
    WEBCAST TODAY!! 03/22/12

    Do you need to lower costs? Raise productivity? Foster innovation? Improve online engagement? But you're still stuck with Documentum? Step away from the ledge - there is hope - let us help you.

    Top 4 Content Imperatives

    - Lower Costs - Reduce labor, maintenance fees, storage and electrical consumption
    - Raise Productivity - Automation and integration, communication, findability
    - Foster Innovation - Enable collaboration, expertise location
    - Improve Online Engagement - Enable user-driven, dynamic marketing initiatives

    With the coming technology wave we see four content imperatives. Every organization has had to reduce costs; cost cutting has become a way of life. Everyone is working three jobs as positions are eliminated. And so we have to reduce labor, reduce maintenance, and reduce the money we are wasting on things like storing content that is redundant or no longer useful.

    We also, to fill that gap, need to raise productivity. Knowledge workers represent the fastest growing segment of the workforce, accounting for 40%-75% of the employees at organizations in sectors like financial services, life sciences, healthcare and retail. What's more, their wages total 18 percent of the United States GDP. And so we can't afford information systems that don't let our top performers be the best they can be. We look to automate the content processes, provide ways to integrate that content into our processes, provide communication to make decisions, and make content more findable so people can make the right decision and move the process forward.

    And really, to get ourselves out of the current financial situation, we can only cut costs so far. We have to innovate out of economic tough times - to find new products and new markets. And to enable the innovation process, we have to enable collaboration and expertise location. So much of innovation is about building on innovations that have come before. To solve problems, we have to be able to find what our organization has already created. We find that problems we need to solve have already been solved if we can find the right document, the right person. So we have to provide systems that enable us to stand on the shoulders of our organization's accomplishments.

    Good content drives great marketing. Online engagement is growing as an absolute necessity for modern, growing marketing organizations that require business users to be enabled for dynamic marketing content creation, updates and targeted content creation and management. Unfortunately, if you are currently stuck with Documentum, you are really lacking in your Web Experience Management capabilities. Documentum previously used FatWire for web publishing. Now FatWire is part of Oracle.

    Oracle provides powerful web engagement capabilities:

    - Increase sales and loyalty by optimizing online engagement
    - Create, manage and moderate contextually relevant, targeted and interactive online experiences
    - Optimize customer engagement across web, mobile and social channels
    - Manage a large-scale, multichannel global online presence with integration to enterprise applications
    - Enable business users to control their content and make their own updates
    - Publish content from native files - enable navigation of project documents, procedures, policy information
    - Enable content display and updates from existing web applications - one click to drag-and-drop content management functionality

    So you get the ability to self-publish information and make it navigable, to move the process of publishing from IT to business users, and the ability to address a whole new area of user engagement with web experience management.

    So... if you are still stuck with Documentum and don't know what to do - contact us - not only will Oracle help you step away from the ledge, but with the MoveOff Documentum program, we are offering you a way out: trade in your Documentum licenses for a 100% credit on Oracle WebCenter. How's that for a nice bonus? It's time to stop maintaining Documentum, and to start innovating with Oracle WebCenter. Learn More Here!

    To learn more about what Oracle WebCenter can offer you today, join us for a webcast - your eyes will be opened to all that's possible. Do More with WebCenter: Extend Beyond Content Management

    Read the article

  • Have Your Cake and Eat it Too: Industry Best Practices + Flexibility

    - by Oracle Accelerate for Midsize Companies
    By Richard Garraputa, VP of Sales & Marketing, brij

    Richard joined brij in 1996 after graduating from the University of North Carolina at Greensboro with degrees in Information Systems and Accounting. He directs brij's overall strategies for both the business development and marketing departments.

    Companies looking for new ERP systems spend so much time comparing the features and functions of software products, but too often shortchange the value of their own processes. Company managers I meet often claim that they are implementing a new ERP system so they can perform better and faster. When asked how, the answer is often "by implementing best practices". But the term 'best practices' is frequently used to mean 'doing things the way everyone else does them' rather than as a starting point or benchmark to build upon by adding your own value.

    Of course, implementing standardized processes across an enterprise is an important step in improving operational efficiencies. But not all companies are alike. Do you ever tell your customers, "We are just like our competition and have no competitive differentiation"? Probably not. So why should the implementation of your business processes be just like your competitor's? Even within the same industry, companies differentiate themselves by leveraging their unique expertise and approach to business. These unique aspects - the competitive differentiators that companies use to thrive in a crowded marketplace - can and should be supported by the implementation of business systems like ERP.

    Modern ERP systems like Oracle's JD Edwards EnterpriseOne have a broad and deep functional footprint designed to integrate a company's core operations. But how can a company take advantage of this footprint without blowing up its implementation budget? Some ERP vendors claim to solve this challenge by stating that their systems come pre-configured with 'best practices'. Too often what they are really saying is that you will have to abandon your key operational differentiators to fit the vendor's template for your business - or extend your implementation and postpone the realization of any benefits.

    Thankfully for midsize companies, there is an alternative to the undesirable options of extended implementation projects or abandoning their competitive differentiators. Oracle Accelerate Solutions speed the time it takes to implement a JD Edwards EnterpriseOne solution based on your unique business characteristics, getting your new ERP system up and running faster without forcing your business to fit a cookie-cutter solution.

    We've been a JD Edwards implementation partner since 1986, and we now leverage Oracle Business Accelerators - cloud-based rapid implementation tools built and maintained by Oracle. Oracle Business Accelerators deliver the benefits of embedded industry best practices without forcing every customer into one set of processes like many template or "clone and go" approaches do. You retain the ability to reconfigure your applications - without customization - as your business changes. Wielded by Oracle partners with industry-specific domain expertise, Oracle Accelerate Solution implementations powered by Oracle Business Accelerators help automate the application configuration to fit your business better, faster. For example, on a recent project at a manufacturing company, the project manager told me that Oracle Business Accelerators helped get them to Conference Room Pilot 20% faster than with a traditional approach. Time savings equal cost savings.

    And if 'better and faster' is your goal for your business performance, shouldn't it be the goal for your ERP implementation as well?

    Established in 1986, brij has been dedicated solely to helping its customers implement Oracle's JD Edwards solutions and to maximizing the value of those customers' IT investments. They are a Gold-level member of Oracle PartnerNetwork and an Oracle Accelerate Solution provider.

    Read the article

  • SQLAuthority News – Stay Connected and Social Media

    - by pinaldave
    I think I have finally gotten back my faith in social media. If you follow my blog, I am sure you are aware of my views on social media - SQLAuthority News - Social Media Confusion - Twitter, FaceBook, LinkedIn and Me. I was not happy about how social media was evolving. Whenever I went to Twitter, LinkedIn or Facebook, I noticed the same updates everywhere. I just thought I was wasting my time doing the same thing everywhere. I strongly believe that there is no dictator on the internet. Nobody has authority over others; everybody can express their ideas as long as they are not violating others' privacy and are not morally wrong. I have decided that instead of trying to improve the world, I should change myself and adjust my needs. Here are a few things I have done to relieve my social media confusion.

    Twitter

    - I un-followed people who were taking up my time with too many updates.
    - I un-followed people who hardly updated at all.
    - I did not follow anybody else's list, as I have no control over who other people follow.
    - I follow not only serious SQL people but some fun stuff as well.
    - I removed all my friends who were on Facebook and repeating the same updates on Twitter. I engage with them on Facebook.
    - I followed people who are very conversational on Twitter.
    - I let anybody follow me.
    - I announce all my blog posts through at least five tweets online.
    - I decided to re-tweet at least five of my favorite tweets of the day; this way I force myself to remain active in the community.

    Follow me on Twitter!

    LinkedIn

    - I updated my career and professional info on LinkedIn.
    - I keep my LinkedIn profile updated with my latest jobs and career news.
    - I let anybody connect with me on LinkedIn.
    - I specify my email address in my profile, keeping it easy for those who want to add me.
    - I read all the profile-related updates of my connections - it is very valuable to know who is where and what changes are happening.
    - I do not add my personal tweets or comments to my LinkedIn profile. I just keep it professional.

    Link with me at LinkedIn

    Facebook

    - I use Facebook only for personal friends.
    - I visit all of my friends at regular intervals and make sure that they are really my friends.
    - I often remove friends from my Twitter list who are sending duplicate updates.
    - I upload my family photos as well as family updates on Facebook, making sure that only my approved friends are able to read my updates.
    - I keep my Facebook very personal and I often chat with my friends on Facebook chat.

    I am no longer confused about social media and I think I am using it appropriately. As I said, one cannot decide for others how to use social media; you can only decide for yourself. I have finally found my peace with social media.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Dissing Architects, or "What's wrong with the coffee?"

    - by Bob Rhubart
    In my conversations with people in architect roles, tales of animosity, disrespect, and outright hostility aren't uncommon. And it's clear that in more than a few organizations, architects regularly face a tough uphill climb. For architects with the requisite combination of technical, organizational, and people skills, that rough treatment is grossly undeserved. But tales of unqualified people in positions up and down the IT food chain are also easy to come by. So what's the other side of the architect story? Are some architects tarnishing the role and making life miserable for their more qualified colleagues?

    The quotes included below were culled from a variety of sources. The criticism is harsh, and the people behind these quotes clearly have issues with architects. Still, whether based on mere opinion or actual experience, the comments shed some light on behaviors that should raise red flags for anyone pursuing a career as an architect. If you're an architect, and you've ever noticed that your coffee tastes like window cleaner, or your car is repeatedly keyed, or no one ever holds the elevator for you, maybe you need to do a little soul searching...

    Those Who Can, Code; Those Who Can't, Architect | Joe Winchester [May 18, 2007]
    "At the moment there seems to be an extremely unhealthy obsession in software with the concept of architecture. A colleague of mine, a recent graduate, told me he wished to become a software architect. He was drawn to the glamour of being able to come up with grandiose ideas - sweeping generalized designs, creating presentations to audiences of acronym addicts, writing esoteric academic papers, speaking at conferences attended by headless engineers on company expense accounts hungrily seeking out this year's grail, and creating e-mails with huge cc lists from people whose signature footer is more interesting than the content. I tried to re-orient him into actually doing some coding, to join a team that has a good product and keen users both of whom are pushing requirements forward, to no avail. Somehow the lure of being an architecture astronaut was too strong and I lost him to the dark side."

    Don't Let Architecture Astronauts Scare You | Joel Spolsky [April 21, 2001]
    "It's very hard to get them to write code or design programs, because they won't stop thinking about Architecture. They're astronauts because they are above the oxygen level, I don't know how they're breathing. They tend to work for really big companies that can afford to have lots of unproductive people with really advanced degrees that don't contribute to the bottom line. Remember that the architecture people are solving problems that they think they can solve, not problems which are useful to solve."

    Non Coding Architects Suck | Richard Henderson [May 24, 2010]
    "If a guy with a badge saying 'system architect' looks blank on low-level issues then he is not an architect, he is a business-analyst who went on a course. He will probably wax lyrical on all things high-level and 'important.' He will produce lovely object hierarchies without a clue to implementation. He will have a moustache and play golf."

    Architects Play Golf | Sunir Shah [August 15, 2012]
    "Often arrogant architects are difficult to get a hold of during the implementation phase because they no longer feel the need to stick around. Especially around midnight when most of the poor sob [sic] developers are still banging away. After all, they've already solved the problem - the rest is just an implementation exercise."

    Engineer vs Architect (part of a discussion in the IT Architect Network Group on LinkedIn)
    "[An] architect spends his time producing white papers full of acronyms he does not understand but that impress his boss, [while the] engineer keeps his head down and does the actual job."

    Architects Don't Code | [Author Unknown]
    "Faulty belief: System Architects don't need to code anymore. They know what they are talking about by virtue of the fact that they are System Architects."

    Read the article

  • What DX level does my graphics card support? Does it go to 11?

    - by Daniel Moth
    Recently I ran into a situation that I have run into quite a few times: someone encounters a machine and the question arises, "Is there a DirectX 11 card in this machine?" Typically the reason you are interested in that is because cards with DirectX 11 drivers fully support DirectCompute (and by extension C++ AMP) for GPGPU programming. The driver specifically is WDDM (1.1 on Windows 7; Windows 8 introduces WDDM 1.2 with cool new capabilities). There are many ways of figuring out if you have a DirectX 11 card, so here are the approaches you can use, with a bonus right at the end of the post.

    Run DxDiag
    WindowsKey + R, type DxDiag and hit Enter. That is the DirectX diagnostic tool, which, unfortunately, only tells you on the "System" tab what the highest version of DirectX installed on your machine is. So if it reports DirectX 11, that doesn't mean you have a DX11 driver! The "Display" tab has a promising "DDI version" label, but unfortunately that doesn't seem to be accurate on the machines I've tested it with (or I may be misinterpreting its use). Either way, this tool is not the one you want for this purpose, although it is good for telling you the WDDM version among other things.

    Use the Microsoft hardware page
    There is a Microsoft Windows 7 compatibility center that lists all hardware (tip: use the advanced search), and you could try to locate your device there... good luck.

    Use Wikipedia or the hardware vendor's website
    Use the Wikipedia pages for the vendors' cards, for both nvidia and amd. Often this information will also be in the specifications for the cards on the IHV site, but it is nice that Wikipedia has a single page per vendor that you can search. There is a column in the tables for API support where you can see the DirectX version.

    Check if it is one of these recommended DX11 cards
    You may not have a DirectX 11 card and are interested in purchasing one. While I am in no position to make recommendations, I will list here some cards from two big IHVs that we know are DirectX 11 capable.

    Some AMD (aka ATI) cards
    - Low end, inexpensive DX11 hardware: Radeon 5450, 5550, 6450, 6570
    - Mid range (decent perf, single precision): Radeon 5750, 5770, 6770, 6790
    - High end (capable of double precision): Radeon 5850, 5870, 6950, 6970
    - Single precision APUs: AMD E-Series APUs, AMD A-Series APUs

    Some NVIDIA cards
    - Low end, inexpensive DX11 hardware: GeForce GT 430, GT 440, GT 520, GTS 450; Quadro 400, 600
    - Mid range (decent perf, single precision): GeForce GTX 460, GTX 550 Ti, GTX 560, GTX 560 Ti; Quadro 2000
    - High end (capable of double precision): GeForce GTX 480, GTX 570, GTX 580, GTX 590, GTX 595; Quadro 4000, 5000, 6000; Tesla C2050, C2070, C2075

    Get the DirectX SDK and run DirectX Caps Viewer
    Download and install the June 2010 DirectX SDK. As part of that you now have the DirectX Capabilities Viewer utility (find it in your start menu by searching for "DirectX Caps Viewer"; the filename is DXCapsViewer.exe). It will list all your devices (emulated and real hardware ones) under the first node. Expand the hardware entries and then expand the Direct3D 11 folder. If you see D3D_FEATURE_LEVEL_11_ under that, then your card supports feature level 11, which means it supports DirectCompute and C++ AMP. In the screenshot accompanying the original post, taken on one of my old laptops, the card only goes to feature level 10.

    Run a utility from the web that just tells you!
    Of course, writing some C++ AMP code that enumerates accelerators and lists the ones that are capable is trivial. However, that requires that you have redistributed the runtime, so a more broadly applicable approach is to use the DX APIs directly to enumerate the DX11-capable cards. That is exactly what the development lead for C++ AMP has done, and he describes and shares that utility at this post.

    Comments about this post by Daniel Moth are welcome at the original blog.
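    For illustration, here is a minimal C++ AMP sketch of that accelerator enumeration (my own sketch, not the utility mentioned above; it assumes Visual Studio 2012 or later with the C++ AMP runtime available):

        #include <amp.h>
        #include <iostream>
        using namespace concurrency;

        int main()
        {
            // List every accelerator C++ AMP can see and flag the capable ones.
            for (const accelerator& acc : accelerator::get_all())
            {
                std::wcout << acc.description
                           << (acc.is_emulated ? L" [emulated]" : L" [hardware]")
                           << (acc.supports_double_precision ? L", double precision" : L"")
                           << std::endl;
            }
            return 0;
        }

    Any non-emulated entry in that list is a DirectCompute-capable (feature level 11) device from C++ AMP's point of view.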

    Read the article

  • Strange Happenings

    - by MOSSLover
    There are weeks we go about our lives thinking nothing is going to change, nothing will happen. Then there are other weeks when a billion things happen at once. Friday started off very weird for me. I flew into Atlanta and met some cool people for another SharePoint event, and I had some good conversations. Then Saturday hit, and my virtual machine bombed in my presentation after the auto-updater ran. I ended up writing code on the board and describing everything in Notepad. As presentations go, it was the best and the worst presentation all wrapped into one. The next day I was in Baltimore and hung out with my aunt, which was relatively uneventful and great.

    Then Monday hit: half my presentations failed and half succeeded, and when my screen froze I started describing the code instead. I was on top of my game until Monday night. On top of the world. Exhausted, I got into Raleigh, and then one of the craziest stories of my life happened.

    My boss had been renting cars through Priceline this week, and I got a different company than in the other weeks. The company gave me a Ford Focus, and I plugged the coordinates of where I wanted to go into my iPhone. I headed out, and then I got to the destination hotel (or so I thought). I went inside; it was the wrong hotel - the right one was a few miles away. I walked outside, hopped into the car, and it sounded like a gunshot. Nothing was starting... Was I doing something wrong? No, I wasn't: the car was completely dead in the water.

    I called the rental car facility, and they told me to call roadside assistance; they were closing for the night. Roadside said they couldn't give me a new car, but they could get me a jump, and then I would have to take it up with the facility. They sent a tow truck to give me a jump, but the guy couldn't jump the car. He told me this vehicle had been towed about an hour earlier and showed me a copy of the slip from when he towed it. We also noticed the rental car company had left one of their price scanning guns in the vehicle. I called roadside again, and now they were interested in getting me a car, because I needed to be onsite the next day. They got the manager of the facility on the phone; he apologized profusely and said he'd be there in 10 minutes. About 30 minutes passed, and he and another guy showed up with a Ford Escape with a leather interior. At that point I handed him the gun, told him someone had left it in the vehicle, and said I was not so happy with them. I asked them to comp my rental; they couldn't, due to Priceline, but if I call him again this week he can get me a voucher. It was about 2 am by the time I was ready to head to the hotel, and I didn't make it in the next morning until 10 am.

    I would say this was a crazy week; all forms of technology are trying to tell me something. What, I have no idea, but we'll see the outcome soon. I feel so weird; tons of change is about to happen. I don't know if it's good or bad. I think this week is some form of omen.

    Read the article

  • Mass Metadata Updates with Folders

    - by Kyle Hatlestad
    With the release of WebCenter Content PS5, a new folder architecture called 'Framework Folders' was introduced. This is meant to replace the 'Folders_g' folder architecture. While the concepts of a folder structure and access to those folders through Desktop Integration Suite remain the same, the underlying architecture of the component has been completely rewritten. One of the main goals of the new folders is to scale better at large volumes and to remove the limitation of 1000 content items or sub-folders within a folder. Along with the new architecture, it has a new look, and a few additional features have been added. One of those features is Query Folders. These are folders that are populated simply by a query rather than by literally putting items within them. This is something that the Library has provided, but it always took an administrator to define them through the Web Layout Editor. Now users can quickly define Query Folders anywhere within the standard folder hierarchy.

    Within the new Framework Folders is the very handy ability to do metadata updates. It's similar to the Propagate feature in Folders_g, but there are some key differences that make it much more flexible and powerful:

    - It works within regular folders and Query Folders. So the content you're updating doesn't all have to be in the same folder... or in a folder at all.
    - The user decides what metadata to propagate. In Folders_g, the system administrator controls which fields will be propagated using a single administration page. In Framework Folders, the user decides at propagation time which fields they want to update.
    - You set the value you want on the propagation screen. In Folders_g, propagation used the metadata defined on the parent folder. With Framework Folders, you supply the new metadata value when you select the fields you want to update; it does not have to be defined on the parent folder.

    Because of these differences, I think the new propagate method is much more useful. Instead of always having to rely on Archiver or a custom spreadsheet, you can quickly do mass metadata updates right within folders. Here are the basic steps to perform propagation:

    - First create a folder for the propagation. You can use a regular folder, but a Query Folder will work as well.
    - Go into the folder to get the results.
    - In the Edit menu, select 'Propagate'.
    - Select the check-box next to each field to update and enter the new value.
    - Click the Propagate button.
    - Once complete, a dialog will appear showing it is done.

    What's also nice is that the process happens asynchronously in the background, which means you can browse to other pages and do other things while it is still working. You aren't stuck on the page waiting for it to complete. In addition, you can add a configuration flag to the server to turn on a status indicator icon: set 'FldEnableInProcessIndicator=1' and it will show a working icon while it is doing the propagation.

    There is a caveat when using propagation on a Query Folder. While a propagation on a regular folder will update all of the items within that folder, a Query Folder propagation will only update the first 50 items. So you may need to run it multiple times depending on the size... and have the query exclude the items as they get updated.

    One extra note... Framework Folders is offered as the default folder architecture in the PS5 release of WebCenter Content. But if you are using WebCenter Content integrated with another product that makes use of folders (WebCenter Portal/Spaces, Fusion Applications, Primavera, etc.), you'll need to continue using Folders_g until those products are updated to use the new folders.
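    As a sketch, the flag mentioned above would typically be added to the Content Server's configuration file (assuming the standard config.cfg location; a server restart is needed for it to take effect):

        # <IntradocDir>/config/config.cfg
        FldEnableInProcessIndicator=1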

    Read the article

  • 65536% Autogrowth!

    - by Tara Kizer
    Twice a year, we move our production systems to our disaster recovery site. Last Saturday night was one of those days. There are about 50 SQL Server databases to be moved to the DR site, which is done via database mirroring. It takes only a few seconds to fail over, but some databases have a bit more involved work, such as setting up replication. Everything went relatively smoothly, but we encountered a weird bug on our most mission-critical system.

    After everything was successfully failed over to the DR site, it was noticed that mirroring was in a suspended state on one of the databases. We thought we had run into a SQL Server 2005 bug that we had been encountering and were working with Microsoft on a fix. Microsoft did fix it in both SQL Server 2005 Service Pack 3 Cumulative Update 13 and Service Pack 4 Cumulative Update 2; however, SP3 CU13 and SP4 both recently failed on this system, so we were not yet patched with the bug fix. As the suspended state was causing us issues with replication, we dropped mirroring. We then noticed we had 10MB of free disk space on the mount point where the principal's data files are stored. I knew something was amiss, as this system should have at least 150GB free on that mount point. I immediately checked the main database's data file and was shocked to see an autogrowth setting of 65536%. The data file autogrew right before mirroring went into the suspended state. 65536%!

    I didn't have a lot of time to research whether this autogrowth problem was a known SQL Server bug, so I deferred that research to today. A quick Google search yielded no results, but emphasis on "quick". I checked our performance system, which was recently restored with a copy of the affected production database, and found the autogrowth setting to be 512MB. So this autogrowth bug was encountered sometime in the last two weeks. On February 26th, we had attempted to install SQL 2005 SP4 on production; however, it had failed (PSS case open with Microsoft). I suspected that the SP4 failure was somehow related to this autogrowth bug, although that turned out not to be the case.

    I then tweeted (@TaraKizer) about this problem to see if the SQL Server community (#sqlhelp) had any insights. It seems several people have either heard of this bug or encountered it. Aaron Bertrand (blog|twitter) referred me to this Connect item.

    Our affected database originated on SQL Server 2000 and was upgraded to SQL Server 2005 in 2007. Back on SQL Server 2000, we were using the default file growth setting, which was a percentage. Sometime after the 2005 upgrade is when we changed it to 512MB. Our situation seemed to fit the bug Aaron referred me to, so now the question was whether Microsoft had fixed it yet.

    I received a reply to my tweet from Amit Banerjee (twitter) that it had been fixed in SP3 CU1 (KB958004). My affected system is on SP3 CU8, so I was initially confused about why we had encountered the bug. Because I don't read things fully, I had missed that there are additional steps you have to follow after applying the bug fix. Amit set me straight. Although you can read this information in the KB article, I will also copy it here in case you are as lazy as me and miss the most important section of it (although if you are as lazy as me, you won't have read this far down my blog post):

    "This hotfix will prevent only future occurrences of this problem. For example, if you restore a database from SQL Server 2000 to a SQL Server 2005 instance that contains this hotfix, this problem will not occur. However, if you already have a database that is affected by this problem, you must follow these steps to resolve this problem manually:
    1. Apply this hotfix.
    2. Set the file growth settings for the affected files to percentage settings, and then set the settings back to megabyte settings.
    3. Take the database offline, and then bring it back online.
    4. Verify that the values of the is_percent_growth column are correct in the sys.database_files system table and in the sys.master_files system table."
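    Translated into T-SQL, the manual fix looks roughly like this (a sketch only: the database and logical file names below are placeholders, and this should be run after the hotfix is applied):

        -- Flip the growth setting to a percentage and then back to megabytes
        ALTER DATABASE AffectedDB MODIFY FILE (NAME = AffectedDB_Data, FILEGROWTH = 10%);
        ALTER DATABASE AffectedDB MODIFY FILE (NAME = AffectedDB_Data, FILEGROWTH = 512MB);

        -- Cycle the database offline and back online
        ALTER DATABASE AffectedDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
        ALTER DATABASE AffectedDB SET ONLINE;

        -- Verify the growth flags are now correct
        SELECT name, growth, is_percent_growth
        FROM sys.master_files
        WHERE database_id = DB_ID('AffectedDB');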

    Read the article

  • Eclipse no longer useful

    - by dgood1
    When I got my Eclipse from the Ubuntu Software Center, it was good and worked fine; I could work on Java projects without trouble. This week I was required to add ADT, so I tried the ADT bundle, assuming it had everything I needed, seeing that the SDK route had more steps. So now I can create Android apps using the ADT bundle. I tried to work on my Java projects again and discovered:

    - I can't run my Java projects: I get "The selection cannot be launched. And there are no recent launches."
    - I also believe Eclipse doesn't know it's a Java program, because it's all in black and white - not the usual green/blue/red/black highlighting when writing comments, variables and Strings.
    - I can't make new projects of ANYTHING unless I use the ADT bundle. New Project only offers CVS (whatever that is).
    - My perspectives seem limited. I remember having more choices, and now I'm limited to [Java], Resource, CVS Repository, Debug, Team Sync. I was told I'd be able to use perspectives to swap between Android and Java development. Even after the ADT installation using "Install New Software", nothing.
    - I can't uninstall/purge/remove Eclipse via the terminal. I tried removing it and then reinstalling it via the Ubuntu Software Center. No results other than its temporary removal.
    - (Possibly unrelated) A large number of repositories are not found when updating Eclipse. (See Step 8 in the summary below.)

    Although, on checking the versions and installation history, I confirmed Android and Java are installed. Eclipse probably just doesn't know they're there.

    Eclipse Indigo: Version 3.7.2, Build id: I20110613-1736

    Summary of what I did before and during the problem:

    1. Downloaded the ADT bundle.
    2. Attempted the instructions from my teacher ("Install New Software"). (Failed, but other than an annoying "can't find repository" during each update, no damage to report.) (Fixed)
    3. Ran the "eclipse" executable from the ADT bundle.
    4. Updated Eclipse. (After the restart, I noticed the problem.) NOTE: other than window arrangement, I had no customizations.
    5. Played around with Window > Preferences and the project properties. Restored default settings after no results.
    6. Tried "apt-get purge eclipse". It couldn't find Eclipse, so nothing happened.
    7. Used the Software Center. No results.
    8. Tried swapping workspaces. I tried a different folder, a deeper folder, renaming. All return the same problem.
    9. Deleted the ADT bundle (browsed the folders, then deleted). Got the ADT SDK only and installed it. Can't find any changes other than some disk space usage. Of course, I can't make Android apps until I unzip the bundle again.
    10. Window > Preferences > Install/Update > Available Software Sites: checked as many repositories as possible, then updated. Still nothing.

    I'm about to make a second attempt at uninstalling it, because I think my last action is just taking up space. But I'll wait until tomorrow, in case an answer will help. Any thoughts?
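    (For the purge attempt in particular, here is a sketch of how one might find the actual package name first; the 'eclipse-platform' name below is only a guess - use whatever the first command reports:

        # list any installed Eclipse-related packages
        dpkg -l | grep -i eclipse

        # then purge the packages actually installed, e.g.:
        sudo apt-get purge eclipse eclipse-platform
        sudo apt-get autoremove

    If nothing shows up, the distro package was likely already removed and only the unpacked bundle remains.)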

    Read the article

  • What is the SharePoint Action Framework and Why do I need it ?

    - by SAF
    For those out there who are a little curious as to whether SAF is of any use to your organisation, please read this FAQ.

    What is SAF?

    SAF is free to use. SAF is the "SharePoint Action Framework"; it was built by myself and Hugo (plus a few others along the way). SAF is written entirely in C#, with the code available from http://saf.codeplex.com. SAF is a way to automate SharePoint configuration changes:

    - An Action is a command/class/task/script written in C# that performs a unit of execution against SharePoint, such as "CreateWeb" or "AddLookupColumn".
    - A SAF Macro is a collection of one or more Actions.
    - A SAF Macro can be run from MSBuild, a Feature, StsAdm or plain old .NET code.
    - Parameters can be passed to a Macro at run-time from a variety of sources, such as environment variables, *.config files, MSBuild properties, Feature properties, command line args, or .NET code.
    - SAF emits lots of trace statements at run-time; these can be viewed using DebugView.
    - One Action can pass parameters to another Action.
    - Parameters can be set using expression syntax such as "DateTime.Now".

    You should consider SAF if you suffer from one of the following symptoms... (see the illustrative sketch after this list)

    - "Our developers write lots of code to deploy changes at release time - it's always rushed."
    - "I don't want my developers shelling out to PowerShell or Stsadm from a Feature."
    - "We have loads of console applications now; I have lost track of where they are and what they do."
    - "We seem to be writing similar scripts against SharePoint in lots of ways; testing is hard."
    - "My scripts often have lots of errors - they are done at the last minute."
    - "When something goes wrong, I have no idea what went wrong or how to solve it."
    - "Our Features get stuck and bomb out halfway through - there's no way to roll them back."
    - "We have tons of Features now - I can't keep track."
    - "We deploy Features to run one-off tasks."
    - "We have a library of reusable scripts, but we can only run it one way; sometimes we want to run it from MSBuild and a Feature."
    - "I want to automate the deployment of changes to our development environment."
    - "I would like to run a housekeeping task on a scheduled basis."

    So I like the sound of SAF - what are the problems?

    Realistically, there are a few things that need to be considered:

    - Someone on your team will need to spend a day or two understanding SAF and deciding exactly how you want to use it. I would suggest a tech lead, sysadmin or SharePoint architect download it, try out the examples, and look through the unit tests. Ask us questions.
    - Although SAF can be downloaded and set to go in a few minutes, you will still need to address questions such as "Do you want to execute your Macros in MSBuild or from a Feature?"
    - You will need to decide who is going to do your deployments - is it each developer for themselves, or do you require a dedicated build manager?
    - As most environments (Dev, QA, Live etc.) require different settings (e.g. URLs, database names, accounts etc.), you will more than likely want to define these and set up a properties file for each environment. (These can then be injected into SAF at run-time.)
    - There may be no Action to solve your particular problem. If this is the case, suggest it to us - we can try to write it, or you can write it yourself. It's very easy to write a new Action: we have an approach to easily unit test it, document it and author it. For example, I wrote one to deploy a WSP in 2 hours the other day. Alternatively, SAF can also call Stsadm commands and PowerShell scripts.

    Anyway, I do hope this helps! If you still need help, or a quick start, we can also offer consultancy around SAF. If you want to know more, give us a call or drop an email to [email protected]
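    To give a flavor of the Action concept, here is a purely illustrative C# sketch - the type names below (ISafAction and the property-injection pattern) are hypothetical stand-ins, not the actual SAF API; see http://saf.codeplex.com for the real base types:

        using System;

        // Hypothetical stand-in for SAF's real action contract.
        public interface ISafAction { void Execute(); }

        // An Action in the spirit of "CreateWeb": one unit of execution against SharePoint.
        public class CreateWeb : ISafAction
        {
            public string Url { get; set; }       // filled in from macro parameters at run-time
            public string Template { get; set; }

            public void Execute()
            {
                // A real implementation would call the SharePoint object model here.
                Console.WriteLine("Creating web at {0} using template {1}", Url, Template);
            }
        }

        class Demo
        {
            static void Main()
            {
                // A macro would normally wire this up; shown inline for illustration.
                new CreateWeb { Url = "http://server/sites/demo", Template = "STS#0" }.Execute();
            }
        }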

    Read the article

  • Who owns the IP rights to software without a written employment contract? Employer or employee? [closed]

    - by P T
    I am a software engineer who had an idea and, over the past 2 years, developed an integrated ERP software solution on my own. I got the idea and coded much of the software in my personal time, utilizing my own resources, but also as an intern/employee at a small wholesale retailer (company A). I had a verbal agreement with the company that I could keep the IP rights to the code and the company would have "shop rights" to use a copy of the software without restrictions. Part of this agreement was that I was heavily underpaid in exchange for keeping the rights.

    Recently things started to take a downturn at company A as it grew fairly large, new head management was formed, and new partners were brought in. The original owners distanced themselves from the business, and the new "greedy" group indicated that they want to claim the IP rights to my software, offering me a contract that would split the IP into 50% co-ownership, completely disregarding the initial verbal agreement. As of now, there is not a single written job description or agreement/contract/policy that I signed with company A; I signed only I-9 and W-4 forms. I now have an opportunity to leave company A and form a new business with 2 partners (company B), obviously using the software as the primary tool. There would be no direct conflict of interest, as company A sells wholesale goods.

    My core question is: "Who owns the code without a contract? Me or company A? (in FL, US)"

    Detailed questions:

    - I am familiar with "shop rights"; I don't have any problem leaving a copy of the code with the company for them to use/enhance to run their wholesale business. What worries me is: can company A make any legal claims to the software/code/IP and potential derived profits/interests after I leave and form company B?
    - Can applying for a copyright on the code at http://www.copyright.gov in my name prevent any legal disputes in the future? Can I use it as evidence for a legal defense? Could adding a note specifying company A as exclusive license holder clarify the arrangements?
    - If I leave and company A sues me, what evidence would they use against me? On what basis would they sue, since their business is in a completely different industry than software (wholesale goods)? Every single source file was created and stored on my personal computer with proper documentation, including a copyright notice with my credentials (name/email/address/phone). It's also worth noting that I developed a significant part of the software prior to my involvement with company A, as a student.
    - If I am forced to sign a contract and company A doesn't honor the verbal agreement, making claims towards ownership, what can I do to settle the matter legally? I'd like to avoid the legal process altogether, as my budget for court battles is extremely limited at the moment.
    - Would altering the code beyond recognition and using it for company B prevent company A from making any copyright claims?

    My common sense tells me that what I developed is by default mine in terms of IP, unless there is a signed legal agreement stating otherwise. But looking online, it may be completely backwards, and this really worries me. I understand that this is not legal advice, and I know that to get the ultimate answer I need to hire a lawyer. I am only hoping to get some valuable input/experience/advice/opinions from those who have been in a similar situation or are familiar with the topic.

    Thank you, PT

    Read the article

  • Oracle Social Network Developer Challenge: Fishbowl Solutions

    - by Kellsey Ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog.

    Today, I give you the final entry in the Oracle Social Network Developer Challenge, held last week during OpenWorld. This one comes from Friend of the 'Lab and Fishbowl Solutions (@fishbowle20) hacker John Sim (@jrsim_uix), whom you might remember from his Xbox Kinect demo at COLLABORATE 12 (presentation slides and abstract) and other hacks and exploits with WebCenter.

    We put this challenge together specifically for developers like John, who like to experiment with new tools, push the envelope of what's possible, and build cool things, and as you can see from his entry, John did just that, mashing together Google Maps and Oracle Social Network into a mobile app built with PhoneGap that uses the device's camera and GPS to keep teams on the move in touch. He calls it a Mobile GeoTagging Solution, but I think Avengers Assemble! would have been equally descriptive, given that was obviously his inspiration. Here's his description of the mobile app:

    "My proposed solution was to design and simplify GeoLocation mapping, and automate updates for users and teams on the move who don't have access to a laptop or don't want to take their iPads out - but allow them to make quick updates to OSN and upload photos taken from their mobile device, there and then. As part of this, the plan was to include a rules engine that could be configured by the user to allow the device to automatically update and post messages when they arrived at a set location(s). Inspiration for this came from on{x} - automate your life."

    Unfortunately, John didn't make it to the conference to show off his hard work in person, but luckily he had a colleague from Fishbowl and a video to showcase his work. (The original post includes several screenshots of John's mobile app.)

    John's thinking is sound. Geolocation is usually relegated to consumer use cases, thanks to services like foursquare, but distributed teams working on projects out in the world definitely need a way to stay in contact. Consider a construction job. Different contractors all converge on a single location, and time is money. Rather than calling or texting each other and risking a distracted-driving accident, an app like John's allows everyone on the job to see exactly where the other contractors are. Using his GPS rules, they could easily be notified about how close each is to the site - definitely useful when you have a flooring contractor sitting idle, waiting for an electrician to finish the wiring.

    The best part is that the project manager or general contractor could stay updated on all the action (or inaction) using Oracle Social Network, either sitting at a desk using the browser app or desktop client, or on the go using one of the native mobile apps built for Oracle Social Network. I can see this being used by insurance adjusters too, and really by any team that, erm, assembles at a given spot. Of course, it's also useful for meeting at the pub after the day's work is done.

    Beyond people, this solution could also be implemented for physical objects that are en route to a destination. Say you're a customer waiting on a rail shipment or a package delivery. You could track your valuables' whereabouts easily as they report their progress via check-ins. If they deviated from the GPS rules, you'd be notified. You might even be able to get a picture into Oracle Social Network with some light hacking.

    Thanks to John and his colleagues at Fishbowl for participating in our challenge. We hope everyone had a good experience. Make sure to check out John's blog post on his work and his experience using Oracle Social Network. Although this is the final official entry we had, tomorrow I'll show you the work of someone who finished code but wasn't able to make the judging event. Stay tuned.
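    As a rough sketch of the geolocation rule John describes, here is what the core of it might look like using the standard W3C geolocation API that PhoneGap exposes (the target coordinates and the postUpdate function are hypothetical stand-ins for the user-configured rule and the call into Oracle Social Network):

        // Example rule: post an update when the device comes within half a kilometer.
        var target = { lat: 35.7796, lon: -78.6382, radiusKm: 0.5 };

        // Haversine distance between two lat/lon points, in kilometers.
        function distanceKm(a, b) {
            var R = 6371,
                dLat = (b.lat - a.lat) * Math.PI / 180,
                dLon = (b.lon - a.lon) * Math.PI / 180;
            var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
                    Math.cos(a.lat * Math.PI / 180) * Math.cos(b.lat * Math.PI / 180) *
                    Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 2 * R * Math.asin(Math.sqrt(h));
        }

        navigator.geolocation.watchPosition(function (pos) {
            var here = { lat: pos.coords.latitude, lon: pos.coords.longitude };
            if (distanceKm(here, target) <= target.radiusKm) {
                postUpdate("Arrived at the site");   // hypothetical call into OSN
            }
        }, null, { enableHighAccuracy: true });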

    Read the article

  • My Dog, Cross-Channel Shopping, and Fusion SCM

    - by Kathryn Perry
    A guest post by Mark Carson, Director, Oracle Fusion Supply Chain Management

    I was walking my dog Max in an open space behind my house. As we tromped through the tall weeds, I remembered it is tick season and that I should get Max some protection. While he sniffed merrily in the tick-infested brush, I started shopping in the middle of an open field on my phone. I thought it would be convenient to pick up the tick medicine from a pet store on the way home.

    Searching the pet store's website, I saw that it had the medicine, but there was no information on whether the store had any in stock, and there were no options for shipping it to the store for pickup. I could return it to the store, but not pick it up there, which seemed kind of odd. I really didn't feel like making calls to the local stores to find out if they had it. Since the product is popular, I tried one of the large 'everything' stores. Browsing its website, I could see that it could be shipped to me, shipped to the store for free, and that the store nearest to me had it in stock. Needless to say, this store became the better option.

    This experience is a small example of why retailers, distributors, and manufacturers have placed a high priority on enabling 'cross-channel commerce.' Shoppers like you and me expect to be able to search, compare, buy and return products online and over the phone, using a variety of devices including PDAs, tablets and in-store kiosks. The pet store lost my business because its web channel had limited information about its stores.

    I have spoken with many customers and prospects about cross-channel commerce. They all realize the business implications and urgency behind it, but recognize there are challenges to enabling it. New and existing applications must be integrated together globally through a consistent cross-channel business process. Integration is required between the applications that provide the initial shopping experience and the delivery applications associated with warehouses, stores, and partners. The enablement must be accomplished in a flexible way to react to fast-changing product portfolios and new acquisitions, while at the same time minimizing costs through reuse of existing systems. Meanwhile, the business must continue to grow, and decision makers need to balance new capability with peak seasons.

    The challenges above are not unique to retail. Any customer in any industry who has multiple points for capturing orders and multiple points for fulfilling orders will face these challenges. With this in mind, we had a unique opportunity in Fusion SCM to rethink how to build a set of modular and flexible applications in the order management space that would make these challenges easier to conquer. The results are Fusion Distributed Order Orchestration and Global Order Promising. These applications can help companies, such as the pet store, enable true cross-channel commerce. The apps provide highly adaptable and flexible business processes to automate order orchestration across multiple cross-channel systems. They also show a global view of supply across warehouses, stores, and partners for real-time availability and more accurate order promising. Additional capabilities include a standards-based integration framework for seamless execution and the ability to reuse existing systems for faster, lower-cost implementations.

    OK, that was a mouthful of features and benefits. As Max waited to cross the street (he can do basic math too), I wondered if he could relate. He does not care about leash laws, pick-up courtesy, where he can and can't walk, what time of day it is, or even ticks. He does not care about how all these things could make walking complicated. He just wants to walk. Similarly, customers just want to shop, and companies just want to make it easier to sell and deliver.

    You can learn more about Distributed Order Orchestration and Global Order Promising in cross-channel commerce here.

    Read the article

  • SQL SERVER – Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace) – Puzzle for You

    - by pinaldave
    First of all – if you are going to say this is a very old subject, I agree: this is a very (very) old subject. In earlier times this was the only option we had to monitor log space. As new versions of SQL Server were released, we were all equipped with DMVs, performance counters, Extended Events, and many more enhancements. However, through all these years I have always used DBCC SQLPERF(logspace) to get the details of the logs. It may be because I learned this command when I started my career, and it has always done what I wanted. Recently I received an interesting question, and I thought I should request your help. Before I do, let us look at the traditional usage of DBCC SQLPERF(logspace). Every time I have to get the details of the log, I run the following script. Additionally, I like to store the time when the log file snapshot was taken, so I can go back and review the history of log file growth. This gives me a fair estimate of when the log file was growing.

        CREATE TABLE dbo.logSpaceUsage
        (
            id INT IDENTITY (1,1),
            logDate DATETIME DEFAULT GETDATE(),
            databaseName SYSNAME,
            logSize DECIMAL(18,5),
            logSpaceUsed DECIMAL(18,5),
            [status] INT
        )
        GO

        INSERT INTO dbo.logSpaceUsage (databaseName, logSize, logSpaceUsed, [status])
        EXEC ('DBCC SQLPERF(logspace)')
        GO

        SELECT * FROM dbo.logSpaceUsage
        GO

    I used to record the details of log file growth every hour of the day, and then we would plot charts using Reporting Services (and Excel in much earlier times). If you look at the script above, it is a very simple script. Now here is the puzzle for you. Puzzle 1: Write a script, based on the table, that gives you the time period with the highest growth, based on the data stored in the table. Puzzle 2: Write a script, based on the table, that gives you the amount of log file growth from the first recording in the table to the latest recording. You may have to run the above script at some interval to get enough data samples of the log file to answer the puzzles. To make things simple, I am giving you a sample data script, with the expected answers listed below, for both of the puzzles. Here is the sample data script:

        -- This is the sample data script for the puzzles
        CREATE TABLE dbo.logSpaceUsage
        (
            id INT IDENTITY (1,1),
            logDate DATETIME DEFAULT GETDATE(),
            databaseName SYSNAME,
            logSize DECIMAL(18,5),
            logSpaceUsed DECIMAL(18,5),
            [status] INT
        )
        GO

        INSERT INTO dbo.logSpaceUsage (databaseName, logDate, logSize, logSpaceUsed, [status])
        SELECT 'SampleDB1', '2012-07-01 7:00:00.000', 5, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 9:00:00.000', 16, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 11:00:00.000', 9, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 14:00:00.000', 18, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-01 7:00:00.000', 5, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-04 7:00:00.000', 15, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-09 7:00:00.000', 25, 10, 0
        GO

    Expected Result of Puzzle 1 (result screenshot not reproduced in this listing): you will notice that there are two entries for database SampleDB3, as there were two instances of log file growth with the same value. Expected Result of Puzzle 2 (result screenshot not reproduced in this listing). Well, please leave a comment with a valid answer, and I will post the valid answers, with due credit, next week. Not to mention that winners will get a surprise gift from me. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: DBCC
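    For anyone wanting to automate the hourly snapshot from application code rather than a SQL Agent job, a minimal C# sketch could look like the following. It simply executes the same INSERT ... EXEC statement shown above; the connection string is a placeholder you would replace with your own:

        using System;
        using System.Data.SqlClient;

        class LogSpaceRecorder
        {
            static void Main()
            {
                // Placeholder connection string - point it at your own server/database.
                string connectionString = "Server=.;Database=YourDb;Integrated Security=true;";

                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();

                    // Capture one snapshot of DBCC SQLPERF(logspace) into the table.
                    string sql = "INSERT INTO dbo.logSpaceUsage (databaseName, logSize, logSpaceUsed, [status]) "
                               + "EXEC ('DBCC SQLPERF(logspace)')";

                    using (var cmd = new SqlCommand(sql, conn))
                    {
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }

    Scheduling this (for example, hourly via Windows Task Scheduler) produces the data samples the puzzles need.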

    Read the article

  • Professional Developers, may I join you?

    - by Ben
    I currently work in technical support for a software/hardware company, and for the most part it's a good job, but it's feeling more and more like I'm getting 'stuck' here. No raises in the 5 years I've been here, and lately there seems to be more hiring from the outside than promotion from within. The work I do is more technical than end-user support, as we deal primarily with our field technicians, who have a little more technical skill than the general user base. As a result I get into much more technical support issues... often tracking down bugs in our software, finding performance bottlenecks in our database schema, etc. What I'm most proud of are the development projects I've come up with on my own and worked on during lunch breaks and slow periods in Support. Over the years I've written a number of useful utilities for the company: diagnostic-type applications that several departments use and appreciate. These include apps that simulate our various hardware devices, log file analysis, time-saving utilities for our work processes, etc. My best projects have been the hardware simulation programs, which are the type of thing we probably wouldn't have put a full-time developer on had anyone thought to do it, but they've ended up being popular and useful enough to be used by Development, QA, R&D, and Support. They allow us to interface our software with simulated hardware, rather than clutter up our work areas with bulky, hard-to-acquire equipment. Since starting here my life has moved forward (married, kid, one more on the way), but it feels like my career has not. I still earn what I earned walking in the door my first day. The company budget is tight, bonuses have gone down, and there have been no raises or cost-of-living / inflation adjustments either. As the sole source of income for my family I feel I need to do more, and I'd like to have a more active role in creating something at work, not just cleaning up other people's mistakes. I enjoy technical work, and I think development is the next logical step in my career. I'd like to bring some "legitimacy" to my part-time development work, and make myself a more skilled and valuable employee. Ultimately, if this can help me better support my family, that would be ideal. Can I make the jump to professional developer? I have an engineering degree, but no formal education in computer science. I write WinForms apps using the .NET framework, do some freelance web development, have volunteered to write software for a nonprofit, and have started experimenting with programming microcontrollers. I enjoy learning new things in the limited free time I have available. I think I have the aptitude to take on a development role, even in an 'apprentice' capacity if such an option is possible. Have any of you moved into development like this? Do any of you developers have any advice or cautionary tales? Are there better career options I haven't thought of? I welcome any and all related comments, and thank you in advance for posting them.

    Read the article

  • Launching Ops Center 12c

    - by user12601629
    Oracle Enterprise Manager Ops Center 12c is the most ambitious version of the Ops Center tooling that we've ever released. I think that made it appropriate that we launched it in grand style! When it became clear that the 12c final release would be complete about this time of year, the marketing team proposed that we roll the launch of 12c into Oracle OpenWorld Tokyo.  I thought that sounded like a fine idea!  You see, I have always loved Japan.  I even studied a bit of Japanese language back in school. OpenWorld Tokyo was an outstanding event this year.  It was held in Roppongi, one of the most stylish districts in Tokyo. And, to make things even better, the sakura (cherry blossoms) were blooming.  If you've never been in Japan for cherry blossom season, it's a must-see!  Here are a couple of pics for you. Here is a picture from Roppongi, near the conference.  Here's a picture near the Imperial Palace.  A couple of friends from the local sales team took me there before my flight out. So, now back to the product launch! We chose to launch the product in John Fowler's "Engineered Systems" keynote address.  It made perfect sense because of the close ties of Ops Center to the Systems portfolio of products.  It was a packed house for the keynote.  Here's a picture I took just before we started -- there were also hundreds more people in "overflow" rooms in other parts of the venue. Here's a picture of me on stage during the launch. While there are countless new features in Ops Center 12c that customers will love, I had to limit myself to discussing just three: mission-critical clouds, Solaris 11, and engineered systems. So, what does "mission-critical cloud" mean?  It means we've expanded EM's cloud capabilities in a couple of key areas. First, we've expanded the "self-service provisioning" capabilities to include SPARC -- not just x86.  Now you can build clouds of Solaris Zones with ease!  Second, we've much more deeply integrated high-end storage and network management into the cloud layers.  This makes our IaaS story much more powerful! For Solaris 11, we didn't simply port our monitoring agent to S11.  That would have been easy, but also boring! We support S11 deeply: full access to the power of the IPS packaging system, the new virtualized networking stack, new Zones features, and the Auto Install framework.  If you're ready to try Solaris 11, then Ops Center is ready for you. Last is the area of engineered systems.  These combinations of hardware and software are fast and powerful. However, we're also on a mission to make them ever easier to manage, and we've made major strides with Ops Center 12c. You can manage these systems as racks, not individual components.  The new capabilities for engineered systems like Exalogic and SPARC SuperCluster are striking. You can read more here: Oracle Unveils Oracle Enterprise Manager Ops Center 12c. So, I'll wrap this up with one final bit of fun. One of my friends from the Oracle marketing department found a super cool place to get dinner: a restaurant called Gonpachi. It turns out this is the place that inspired the scene in the Quentin Tarantino movie Kill Bill where Uma Thurman fights 88 ninjas.  Here is a picture I snapped while we were there. It was surely a good time. Check it out next time you're in Tokyo.

    Read the article

  • EF4 Code First Control Unicode and Decimal Precision, Scale with Attributes

    - by Dane Morgridge
    There are several attributes available when using code first with the Entity Framework 4 CTP5 Code First option.  When working with strings you can use [MaxLength(length)] to control the length, and [Required] will work on all properties.  But there are a few things missing. By default all strings will be created as unicode, so you will get nvarchar instead of varchar.  You can change this using the fluent API, or you can create an attribute to make the change.  If you have a lot of properties, the attribute will be much easier and require less code. You will need to add two classes to your project to create the attribute itself:

        public class UnicodeAttribute : Attribute
        {
            bool _isUnicode;

            public UnicodeAttribute(bool isUnicode)
            {
                _isUnicode = isUnicode;
            }

            public bool IsUnicode { get { return _isUnicode; } }
        }

        public class UnicodeAttributeConvention : AttributeConfigurationConvention<PropertyInfo, StringPropertyConfiguration, UnicodeAttribute>
        {
            public override void Apply(PropertyInfo memberInfo, StringPropertyConfiguration configuration, UnicodeAttribute attribute)
            {
                configuration.IsUnicode = attribute.IsUnicode;
            }
        }

    The UnicodeAttribute class gives you a [Unicode] attribute that you can use on your properties, and the UnicodeAttributeConvention tells EF how to handle the attribute. You will need to add a line to the OnModelCreating method inside your context for EF to recognize the attribute:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.Add(new UnicodeAttributeConvention());
            base.OnModelCreating(modelBuilder);
        }

    Once you have this done, you can use the attribute in your classes to make sure that you get database types of varchar instead of nvarchar:

        [Unicode(false)]
        public string Name { get; set; }

    Another option that is missing is the ability to set the precision and scale on a decimal.  By default decimals get created as (18,0).  If you need decimals to be something like (9,2), then you can once again use the fluent API or create a custom attribute.
    As with the unicode attribute, you will need to add two classes to your project:

        public class DecimalPrecisionAttribute : Attribute
        {
            int _precision;
            private int _scale;

            public DecimalPrecisionAttribute(int precision, int scale)
            {
                _precision = precision;
                _scale = scale;
            }

            public int Precision { get { return _precision; } }
            public int Scale { get { return _scale; } }
        }

        public class DecimalPrecisionAttributeConvention : AttributeConfigurationConvention<PropertyInfo, DecimalPropertyConfiguration, DecimalPrecisionAttribute>
        {
            public override void Apply(PropertyInfo memberInfo, DecimalPropertyConfiguration configuration, DecimalPrecisionAttribute attribute)
            {
                configuration.Precision = Convert.ToByte(attribute.Precision);
                configuration.Scale = Convert.ToByte(attribute.Scale);
            }
        }

    Add your line to OnModelCreating:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.Add(new UnicodeAttributeConvention());
            modelBuilder.Conventions.Add(new DecimalPrecisionAttributeConvention());
            base.OnModelCreating(modelBuilder);
        }

    Now you can use the following on your properties:

        [DecimalPrecision(9,2)]
        public decimal Cost { get; set; }

    Both these options use the same concepts, so if there are other attributes that you want to use, you can create them quite simply.  The key to it all is the PropertyConfiguration classes.  If there is a class for the datatype, then you should be able to write an attribute to set almost everything you need.  You could also create a single attribute to encapsulate all of the possible string combinations instead of having multiple attributes on each property. All in all, I am loving code first, and having attributes to control database generation instead of using the fluent API is huge and saves me a great deal of time.

    Read the article

  • Social Technology and the Potential for Organic Business Networks

    - by Michael Snow
    Guest Blog Post by: Michael Fauscette, IDC

    There has been a lot of discussion around the topic of social business, or social enterprise, over the last few years. The concept of applying emerging technologies from the social Web, combined with changes in processes and culture, has the potential to provide benefits across the enterprise over a wide range of operations impacting employees, customers, partners, and suppliers. Companies are using social tools to build out enterprise social networks that provide, among other things, a people-centric collaborative and knowledge-sharing work environment, which over time can break down organizational silos. On the outside of the business, social technology is adding new ways to support customers, market to prospects and customers, and even support the sales process. We're also seeing new ways of connecting partners to the business that increase collaboration and innovation. All of the new "connectivity" is, I think, leading businesses to a business model built around the concept of the network or ecosystem instead of the old "stand-by-yourself" approach. So, if you think about businesses as networks in the context of all of the other technical and cultural change factors that we're seeing in the new information economy, you can start to see that there's a lot of potential for co-innovation and collaboration that was very difficult to arrange before. This networked business model, or what I've started to call "organic business networks," is the business model of the information economy.

    The word "organic" could be confusing, but when I use it in this context, I'm thinking it has similar traits to organic computing. Organic computing is a computing system that is self-optimizing, self-healing, self-configuring, and self-protecting. More broadly, organic models are generally patterns and methods found in living systems, used as a metaphor for non-living systems.

    Applying an organic model, organic business networks are networks that represent the interconnectedness of the emerging information business environment. Organic business networks connect people, data/information, content, and IT systems in a flexible, self-optimizing, self-healing, self-configuring, and self-protecting system. People are the primary nodes of the network, but the other nodes — data, content, and applications/systems — are no less important.

    A business built around the organic business network business model would incorporate the characteristics of a social business, but go beyond the basics — i.e., use social business as the operational paradigm, but also use organic business networks as the mode of operating the business. The two concepts complement each other: social business is the "what," and the organic business network is the "how." An organic business network lets the business work go outside of traditional organizational boundaries and become the continuously adapting implementation of an optimized business strategy. Value creation can move to the optimal point in the network, depending on strategic influencers such as the economy, market dynamics, customer behavior, prospect behavior, partner behavior and needs, supply-chain dynamics, predictive business outcomes, etc.

    An organic business network driven company is the antithesis of a hierarchical, rigid, reactive, process-constrained, and siloed organization.
    Instead, the business can adapt to changing conditions, leverage assets effectively, and thrive in a hyper-connected, globally competitive, information-driven environment.

    To hear more on this topic, I'll be presenting in the next webcast of the Oracle Social Business Thought Leader Webcast Series, "Organic Business Networks: Doing Business in a Hyper-Connected World," this coming Thursday, June 21, 2012, 10:00 AM PDT – Register here

    Read the article

  • 45 minutes to talk about C# [closed]

    - by Philip
    I have the opportunity to give a 45-minute talk on C# in the theory of programming languages class I'm taking. The college teaches Java almost exclusively, so that's what all the students are most familiar with. (There's a little C, assembly, Prolog and LISP as well.) I decide what to talk about. It seems to me the best approach is to focus on a few of the big, obvious differences between C# and Java. I don't intend it to be a recommendation to use C# -- there are reasons to use each, mostly because of their ecosystems. So I want to focus on C# as a language. I don't want to go too fast and end up listing a whole bunch of features without showing their usefulness. My current plan is this: Functions as first-class objects. This is, in my opinion, one of the biggest differences between C# and Java. The professor briefly mentioned this notion and showed a LISP example, but many of the students have probably never used it. I can show real-world examples where it's made my code more readable. Lambda expressions as concise syntax for anonymous functions. Obviously with examples to show how this is useful. The real hit-home examples will be at the end when it's combined with the rest. I don't see an advantage to first showing the old delegate syntax and then replacing it with lambdas -- most of us won't have ever seen delegates anyway, so it would just be confusing. The yield keyword and how it's different from returning an array. I have the impression that a lot of C# developers aren't familiar with how to use this. It will likely be very foreign to Java developers. I have some examples from my own work where it was really useful, such as iterating over a tree traversal, or iterating over neighbors in a graph where the neighbors aren't stored in memory. In both cases, doing it in Java would likely mean returning a complete list -- with yield I can stop iterating if I find what I want early on, without using memory for superfluous lists or arrays. Extension methods as a way to write implementation on interfaces. We'll all be familiar with how interfaces don't allow method implementation, and how this leads to code duplication. I'll show a specific example of this and how the extension method can solve the problem. Demonstrate how the above can be combined by implementing some simple Linq methods and using them (see the sketch below). Where, Select, First, maybe more depending on how much time is left. Ideas on which ones might 'hit home' the best? There are other things I could talk about, such as generics, value types, properties and more. I haven't yet thought of good ways to incorporate these. In the case of generics and value types, the advantages might not be obvious or as relevant. Properties are obviously useful, particularly since we're taught strict JavaBeans here, but I don't know if I could integrate it with the "path to Linq" discussion above without it feeling tacked on. So I'm looking for thoughts on how to talk about C#, and what to talk about. Even minor details. I'm sure there are more experienced C# developers than me here who have good insight about what's really important in the language, and what would miss the point.
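    A minimal sketch of what that combined demo could look like: hand-rolled Where and First written as extension methods using yield, consumed with lambdas. The MiniLinq name is invented for the example:

        using System;
        using System.Collections.Generic;

        // Tiny re-implementations of two LINQ methods for the demo.
        static class MiniLinq
        {
            // Extension method + yield: lazily filters a sequence.
            public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> predicate)
            {
                foreach (var item in source)
                    if (predicate(item))
                        yield return item; // production stops as soon as the caller stops iterating
            }

            // First short-circuits the underlying iteration; no full list is ever built.
            public static T First<T>(this IEnumerable<T> source, Func<T, bool> predicate)
            {
                foreach (var item in source)
                    if (predicate(item))
                        return item;
                throw new InvalidOperationException("No matching element.");
            }
        }

        class Demo
        {
            static void Main()
            {
                var numbers = new[] { 1, 8, 3, 12, 5, 20 };

                // Lambdas passed as first-class values.
                Console.WriteLine(numbers.First(n => n > 10));    // 12; iteration stops early

                foreach (var even in numbers.Where(n => n % 2 == 0))
                    Console.WriteLine(even);                      // 8, 12, 20
            }
        }

    Something like this touches all four points at once: first-class functions, lambdas, yield, and extension methods, and shows why First never needs to build a complete list.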

    Read the article

  • OEG11gR2 integration with OES11gR2 Authorization with condition

    - by pgoutin
    Introduction. This OES use-case was defined originally by Subbu Devulapalli (http://accessmanagement.wordpress.com/). Based on this OES museum use-case, I have developed an OEG11gR2 policy able to deal with OES authorization with conditions. From an OEG point of view, the way to deal with an OES condition is to provide some environmental/context attributes along with the OES request.

    Museum Use-Case. All paintings in the museum have security sensors; an alarm goes off when a person comes too close to a painting. The employee designated for maintenance needs to use their ID and disable the alarm before maintenance. You are the Security Administrator for the museum and you have been tasked with creating authorization policies to manage authorization for different paintings. Your first task is to understand how paintings are organized. Asking around, you are surprised to see that there is no formal process in place, so you need to start from scratch. The museum tracks the following attributes for each painting: 1. Name of the work, 2. Painter, 3. Condition (good/poor), 4. Cost. You compile the list of paintings:

        Name of Painting   Painter             Paint Condition   Cost
        Mona Lisa          Leonardo da Vinci   Good              100
        Magi               Leonardo da Vinci   Poor              40
        Starry Night       Vincent Van Gogh    Poor              75
        Still Life         Vincent Van Gogh    Good              25

    Being a software geek who doesn't (yet) understand art, you feel that the price (or insurance price) of a painting is the most important criterion. So you decide that, based on years of experience, employees can be tasked with maintaining different paintings: paintings costing over 50 should only be handled by employees with over 20 years of experience, and employees with less than 10 years of experience should not handle any painting. Let us start with policy modeling. All paintings have a common set of attributes and actions, so it will be good to have them under a single Resource Type. Based on this resource type we will create the actual resources. So our high-level model is:

    1) Resource Type: Painting, which has the action manage and the following four attributes: a) Name of the work, b) Painter, c) Condition (good/poor), d) Cost
    2) To keep things simple, let's use the painting name for the resource name (in the real world you would use a unique identifier, because in the future we may end up with more than one painting that has the same name)
    3) Create resources based on the previous table
    4) Create an identity attribute Experience (Integer)
    5) Create the following authorization policies: a) Allow employees with over 20 years of experience to access all paintings; b) Allow employees with 10–20 years of experience to access paintings which cost less than 50; c) Deny access to all paintings for employees with less than 10 years of experience

    OES Authorization Configuration. We need to create two authorization policies with specific conditions: a) Allow employees with over 20 years of experience to access all paintings; b) Allow employees with 10–20 years of experience to access paintings which cost less than 50. We do not need an explicit policy for "Deny access to all paintings for employees with less than 10 years of experience," because Oracle Entitlements Server will automatically deny if there is no matching policy.
    OEG Policy. The OEG policy looks like the following (policy screenshot not reproduced in this listing). The 11g Authorization filter configuration is similar (screenshot not reproduced). The ${PAINTING_NAME} and ${USER_EXPERIENCE} variables are initialized by "Retrieve from the HTTP header" filters for testing purposes. That is to say, under Service Explorer, we need to provide the two attributes "Experience" and "Painting" expected by the OES 11g Authorization filter described above.
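    To make the policy semantics concrete before wiring them into OES, here is a plain C# sketch of the decision logic that the two policies plus the implicit deny express. This is only an illustration of the rules, not OES or OEG API code, and the boundary handling (exactly 20 years) is one possible reading:

        using System;

        enum Decision { Permit, Deny }

        static class MuseumPolicy
        {
            // 20+ years: any painting; 10-20 years: only paintings costing less than 50;
            // anything else falls through to the implicit deny.
            public static Decision Authorize(int yearsOfExperience, decimal paintingCost)
            {
                if (yearsOfExperience >= 20)
                    return Decision.Permit;

                if (yearsOfExperience >= 10 && paintingCost < 50)
                    return Decision.Permit;

                // No matching policy: OES denies by default.
                return Decision.Deny;
            }
        }

        class PolicyDemo
        {
            static void Main()
            {
                Console.WriteLine(MuseumPolicy.Authorize(25, 100)); // Permit (Mona Lisa)
                Console.WriteLine(MuseumPolicy.Authorize(15, 40));  // Permit (Magi)
                Console.WriteLine(MuseumPolicy.Authorize(15, 75));  // Deny (Starry Night)
                Console.WriteLine(MuseumPolicy.Authorize(5, 25));   // Deny (Still Life)
            }
        }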

    Read the article

  • How does interpolation actually work to smooth out an object's movement?

    - by user22241
    I've asked a few similar questions over the past 8 months or so with no real joy, so I am going to make the question more general. I have an Android game which is OpenGL ES 2.0. Within it I have the following game loop, which works on a fixed time step principle (dt = 1 / ticksPerSecond):

        loops = 0;
        while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
            updateLogic(dt);
            nextGameTick += skipTicks;
            timeCorrection += (1000d / ticksPerSecond) % 1;
            nextGameTick += timeCorrection;
            timeCorrection %= 1;
            loops++;
        }
        render();

    My integration works like this:

        sprite.posX += sprite.xVel * dt;
        sprite.posXDrawAt = sprite.posX * width;

    Now, everything works pretty much as I would like. I can specify that I would like an object to move across a certain distance (screen width, say) in 2.5 seconds and it will do just that. Also, because of the frame skipping that I allow in my game loop, I can do this on pretty much any device and it will always take 2.5 seconds. Problem: however, when a render frame skips, the graphics stutter. It's extremely annoying. If I remove the ability to skip frames, then everything is as smooth as you like, but it will run at different speeds on different devices. So that's not an option. I'm still not sure why frames skip, but I would like to point out that this has nothing to do with poor performance; I've taken the code right back to one tiny sprite and no logic (apart from the logic required to move the sprite) and I still get skipped frames. And this is on a Google Nexus 10 tablet (and as mentioned above, I need frame skipping to keep the speed consistent across devices anyway). So, the only other option I have is to use interpolation (or extrapolation). I've read every article there is out there, but none have really helped me understand how it works, and all of my attempted implementations have failed. Using one method I was able to get things moving smoothly, but it was unworkable because it messed up my collision. I can foresee the same issue with any similar method, because the interpolation is passed to (and acted upon within) the rendering method, at render time. So if collision corrects the position (character now standing right next to a wall), then the renderer can alter its position and draw it in the wall. So I'm really confused. People have said that you should never alter an object's position from within the rendering method, but all of the examples online show this. So I'm asking for a push in the right direction; please do not link to the popular game loop articles (deWitters, Fix Your Timestep, etc.) as I've read them multiple times. I'm not asking anyone to write my code for me. Just explain, please, in simple terms how interpolation actually works, with some examples. I will then go and try to integrate any ideas into my code and will ask more specific questions if need be further down the line. (I'm sure this is a problem many people struggle with.)
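    For readers looking for the pattern this question asks about: one common approach is to keep both the previous and the current simulated position, and let the renderer blend between them based on how far into the current tick the render call falls. A minimal sketch in C# follows (the idea is language-agnostic and maps directly to the Java loop above):

        using System;

        class Sprite
        {
            public float PrevX;   // position at the previous fixed update
            public float PosX;    // position at the most recent fixed update
            public float XVel;

            public void UpdateLogic(float dt)
            {
                PrevX = PosX;          // remember where we were
                PosX += XVel * dt;     // fixed-timestep integration, as in the question
                // ... collision resolution may adjust PosX here; that is fine,
                // because rendering only ever reads PosX, it never writes it.
            }

            // alpha in [0,1]: fraction of the current tick elapsed at render time.
            public float RenderX(float alpha)
            {
                return PrevX + (PosX - PrevX) * alpha;   // draw slightly "in the past"
            }
        }

        class LoopSketch
        {
            static void Main()
            {
                var s = new Sprite { XVel = 100f };
                float dt = 1f / 25f;              // e.g. ticksPerSecond = 25

                s.UpdateLogic(dt);                // one fixed update: PosX = 4
                // Suppose render happens 40% of the way through the next tick:
                Console.WriteLine(s.RenderX(0.4f));   // 1.6, between PrevX (0) and PosX (4)
            }
        }

    The blend factor is computed in the loop as alpha = (now - timeOfLastTick) / tickInterval. Because the renderer only reads the simulated positions and never writes them, collision resolution is unaffected; the cost is that everything is displayed up to one tick in the past, which is why skipped render frames stop looking like stutter.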

    Read the article

  • What information must never appear in logs?

    - by MainMa
    I'm about to write the company guidelines about what must never appear in logs (the trace of an application). In fact, some developers try to include as much information as possible in the trace, making it risky to store those logs, and extremely dangerous to submit them, especially when the customer doesn't know this information is stored, because she never cared about this and never read the documentation and/or warning messages. For example, when dealing with files, some developers are tempted to trace the names of the files. For example, before appending a file name to a directory, if we trace everything on error, it will be easy to notice that the appended name is too long, and that the bug in the code was forgetting to check the length of the concatenated string. That is helpful, but this is sensitive data, and must never appear in logs. In the same way, passwords, IP addresses and network information (MAC address, host name, etc.)¹, database accesses, direct input from the user, and stored business data must never appear in the trace. So what other types of information must be banished from the logs? Are there any guidelines already written which I can use? ¹ Obviously, I'm not talking about things such as IIS or Apache logs. What I'm talking about is the sort of information which is collected with the sole intent of debugging the application itself, not of tracing the activity of untrusted entities. Edit: Thank you for your answers and your comments. Since my question is not too precise, I'll try to answer the questions asked in the comments: What am I doing with the logs? The logs of the application may be stored in memory, in a plain text file on the hard disk of localhost, in a database (again in plain text), or in Windows Events. In every case, the concern is that those sources may not be safe enough. For example, when a customer runs an application and this application stores logs in a plain text file in the temp directory, anybody who has physical access to the PC can read those logs. The logs of the application may also be sent over the internet. For example, if a customer has an issue with an application, we can ask her to run this application in full-trace mode and to send us the log file. Also, some applications may send crash reports to us automatically (and even if there are warnings about sensitive data, in most cases customers don't read them). Am I talking about specific fields? No. I'm working on general business applications only, so the only sensitive data is business data. There is nothing related to health or other fields covered by specific regulations. But thank you for mentioning that; I probably should take a look at those fields for some clues about what I can include in the guidelines. Isn't it easier to encrypt the data? No. It would make every application much more complicated, especially if we want to use C# diagnostics and TraceSource. It would also require managing authorizations, which is not the easiest thing to do. Finally, if we are talking about the logs submitted to us by a customer, we must be able to read the logs, but without having access to sensitive data. So technically, it's easier to never include sensitive information in logs at all, and to never have to care about how and where those logs are stored.
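    To illustrate the "never include it in the first place" approach the question leans toward, here is a small C# sketch of a sanitizing wrapper around TraceSource. The helper names (SafeTrace, Mask) and the masking rules are invented for the example; they are a starting point for guidelines, not an existing API:

        using System;
        using System.Diagnostics;
        using System.Text.RegularExpressions;

        // Hypothetical helper: scrubs obvious sensitive patterns before tracing.
        static class SafeTrace
        {
            static readonly TraceSource Source = new TraceSource("App");

            // Rough IPv4 matcher; illustration only, not production-grade.
            static readonly Regex Ip = new Regex(@"\b\d{1,3}(\.\d{1,3}){3}\b");

            static SafeTrace()
            {
                Source.Switch = new SourceSwitch("AppSwitch") { Level = SourceLevels.All };
                Source.Listeners.Add(new ConsoleTraceListener());
            }

            static string Mask(string message)
            {
                return Ip.Replace(message, "[ip]");
            }

            public static void Error(string message)
            {
                Source.TraceEvent(TraceEventType.Error, 0, Mask(message));
            }

            // For values known to be sensitive (file names, user input),
            // log a property of the value rather than the value itself.
            public static void ErrorWithLength(string what, string sensitiveValue)
            {
                Source.TraceEvent(TraceEventType.Error, 0, "{0}: length={1}",
                    what, sensitiveValue == null ? 0 : sensitiveValue.Length);
            }
        }

        class Example
        {
            static void Main()
            {
                // The file-name case from the question: trace the length, not the name.
                SafeTrace.ErrorWithLength("concatenated path", @"C:\some\very\long\user\supplied\path.txt");

                // Traced as "connection from [ip] refused".
                SafeTrace.Error("connection from 192.168.0.12 refused");
            }
        }

    Guidelines could then mandate routing all tracing through such a wrapper, so the "what must never appear" list is enforced in one place instead of in every call site.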

    Read the article

< Previous Page | 729 730 731 732 733 734 735 736 737 738 739 740  | Next Page >