Search Results


  • Encrypting your SQL Server Passwords in Powershell

    - by laerte
    A couple of months ago, a friend of mine who is now bewitched by the seemingly supernatural abilities of Powershell (+1 for the team) asked me what, initially, appeared to be a trivial question: "Laerte, I do not have the luxury of being able to work with my SQL servers through Windows Authentication, and I need a way to automatically pass my username and password. How would you suggest I do this?" Given that I knew he, like me, was using the SQLPSX modules (an open source project created by Chad Miller; a fantastic library of reusable functions and PowerShell scripts), I merrily replied, "Simply pass the Username and Password in SQLPSX functions". He rather pointedly responded: "My friend, I might as well pass: -UserName 'Me' -Password 'NowEverybodyKnowsMyPassword'." As I do have the pleasure of working with Windows Authentication, I had not really thought this situation through yet (and thank goodness I only revealed my temporary ignorance to a friend, so the embarrassment was minimized). After discussing this puzzle with Chad Miller, he showed me some code for saving passwords in SQL Server tables, which he had demo'd in his Powershell ETL session at Tampa SQL Saturday (and you can download the scripts from here). The solution seemed to be pretty much ready to go, so I showed it to my Authentication-impoverished friend, only to discover that we were only half-way there: "That's almost what I want, but the details need to be stored in my local txt file, together with the names of the servers that I'll actually use the Powershell scripts on. Something like:
    Server1,UserName,Password
    Server2,UserName,Password"
    I thought about it for just a few milliseconds (Ha! Of course I'm not telling you how long it actually took me; I have to do my own marketing, after all) and the solution was finally ready. First, we have to download Library-StringCripto (with many thanks to Steven Hystad), which is composed of two functions: one for encryption and the other for decryption, both of which are used to manage the password. If you want to know more about the library, you can see more details in the help functions. Next, we have to create a txt file with your encrypted passwords:
        $ServerName = "Server1"
        $UserName = "Login1"
        $Password = "Senha1"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

        $ServerName = "Server2"
        $UserName = "Login2"
        $Password = "senha2"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append
    And in the c:\temp\ServersSecurePassword.txt file which we've just created, you will find your Username and Password, all neatly encrypted. Let's take a look at what the txt looks like. And in case you're wondering, server names, usernames and passwords are all separated by commas. Decryption is actually much simpler:
        Read-EncryptedString -InputString $EncryptString -password "YourPassword"
    (Just remember that the password you use to decrypt must be exactly the same as the one used to encrypt.)
    Finally, just to show you how smooth this solution is, let's say I want to use the Invoke-DBMaint function from SQLPSX to perform a checkdb on a system database: it's just a case of split, decrypt and be happy!
        Get-Content c:\temp\ServerSecurePassword.txt | foreach {
            [array] $Split = ($_).split(",")
            Invoke-DBMaint -server $($Split[0]) -UserName (Read-EncryptedString -InputString $Split[1] -password "YourPassword") -Password (Read-EncryptedString -InputString $Split[2] -password "YourPassword") -Databases "SYSTEM" -Action "CHECK_DB" -ReportOn c:\Temp
        }
    This is why I love Powershell.

    Read the article

  • The Numbers of Customer Experience

    - by Christie Flanagan
    This week, we'll be continuing our conversations about Customer Experience (CX) on the Oracle WebCenter blog. While we all know that customer experience is critically important for acquiring new customers and engendering long term brand loyalty, I thought we could kick this week off by taking a look at the numbers of customer experience. I'm sure you'll agree that nothing quite puts things into perspective like numbers and figures. A whopping 86% of consumers say that they are willing to pay more for a better customer experience. But many companies are failing to step up to the challenge. And when companies fail to deliver on customer experience expectations, they leave money on the table. A huge percentage of customers, 89%, begin doing business with a competitor following a poor customer experience. Breaking up isn't hard to do, and today's empowered customers have no qualms about taking their business elsewhere when their expectations for customer experience are not met. Over a quarter of consumers, 26%, posted a negative comment on a social networking site like Facebook or Twitter following a poor customer experience. Today, individual customer service failures have the ability to easily snowball. An unsatisfied customer can easily share their rancor with their entire social network and chip away at your brand's reputation. A large number of consumers, 79%, who shared complaints about poor customer experience online had their complaints ignored. Companies ignore customer complaints at their own peril. And unsatisfied customers, when handled effectively, have the potential to become advocates for your brand. Of the 21% of consumers who did get responses to complaints, more than half had positive reactions to the same company about which they were previously complaining. Half of consumers will give a brand only a week to respond to a question before they stop doing business with them. The clock is ticking when customers have questions about your brand, and a week is an eternity in the realm of customer experience. The source for these stats is the 2011 Customer Experience Impact (CEI) Report, which explores the relationship between consumers and brands. The report is based on a survey commissioned by RightNow (acquired by Oracle in 2012) and conducted by Harris Interactive. If you're interested in seeing more facts and figures about customer experience, download the full report.

    Read the article

  • Removing Menu Items from Window Tabs

    - by Geertjan
    So you're working on your NetBeans Platform application and you notice that when you right-click on tabs in the predefined windows, e.g., the Projects window, you see a long list of popup menus. For whatever reason, you decide you don't want those popup menus. You right-click the application and go to the Branding dialog. There you uncheck the checkboxes shown unchecked below: As you can see above, you've removed three features, all of them related to closing the windows in your application. Therefore, "Close" and "Close Group" are now gone from the list of popup menus: But that's not enough. You also don't want the popup menus that relate to maximizing and minimizing the predefined windows, so you uncheck the checkboxes that relate to that: And, hey, now they're gone too: Next, you decide to remove the feature for floating, i.e., undocking the windows from the main window: And now they're gone too: However, even when you uncheck all the remaining checkboxes, as shown here... you're still left with those last few pesky popup menu items that just will not go away no matter what you do: The reason for the above? Those actions are hardcoded into the action list, which is a bug. Until it is fixed, here's a handy workaround: Set an implementation dependency on "Core - Windows" (core.window). That is, set a dependency and then specify that it is an implementation dependency, i.e., that you'll be using an internal class, not one of the official APIs. In one of your existing modules, or in a new one, make sure you have (in addition to the above) a dependency on Lookup API and Window System API. And then, add the class below to the module:
        import javax.swing.Action;
        import org.netbeans.core.windows.actions.ActionsFactory;
        import org.openide.util.lookup.ServiceProvider;
        import org.openide.windows.Mode;
        import org.openide.windows.TopComponent;

        @ServiceProvider(service = ActionsFactory.class)
        public class EmptyActionsFactory extends ActionsFactory {

            @Override
            public Action[] createPopupActions(TopComponent tc, Action[] actions) {
                return new Action[]{};
            }

            @Override
            public Action[] createPopupActions(Mode mode, Action[] actions) {
                return new Action[]{};
            }
        }
    Hurray. Farewell to superfluous popup menu items on your window tabs. In the screenshot below, the tab of the Projects window is being right-clicked and no popup menu items are shown, which is true for all the other windows, those that are predefined as well as those that you add afterwards:

    Read the article

  • viable part-time career in IT/programming?

    - by Rider
    Hi, I'd like to ask for some career advice from you people. Is there a viable job/career that can be done in programming/IT for the long term? Right now, I am thinking about the website (PHP?) developer path. My background: I have a degree in computer science and have been a programmer/system analyst for almost 10 years. Lately I took a big break from programming and studied for a B.arch. degree (yes, architecture), only to discover that architecture has offered zero (0) jobs where I'm from for 3 years already (and no, I am not going to move, and the grass is not greener in other places). I have never been particularly interested in programming; in fact I was bored by it. But I was always quite good at both programming and system analysis, and very valued by practically all my employers. On the other hand, I have never been valued or offered a good job in any other field (although I can do many things, like design, architecture, translations, documentation, teaching, etc.). I guess the human component has always been more important for me in programming jobs - I value all the good people I worked with, but not the projects. However, I have about zero skills or desire to be a project manager. I also have close to zero skills for selling myself. I like it best when I can do "my thing", have my niche, have ownership of some project. Right now my career perspective is to do part-time programming and part-time yoga teaching. I have already started the yoga teaching part. Do you think that part-time programming is viable? And what niche works best for that? I have considered web development, QA, or software development in a company like I did before. However, my fear is that when you do programming part-time, you get the most boring coding work, only to see your colleagues move on to more interesting projects and up their respective career ladders. I also fear that part-timers are not especially needed either. And, since I don't share much enthusiasm for programming, I'd rather not be around young programmers boiling with geeky enthusiasm about coding; a QA mindset, with people from different backgrounds and life paths, might work better for me. Thanks for any advice, --Rider

    Read the article

  • Commerce Anywhere...Where the Web, Store, Mobile, Social and Call Center Come Together

    - by divya.malik
    I am pleased to introduce guest blogger Bill Zujewski today. Bill has just joined the Oracle CRM Product Marketing team as part of our recent ATG acquisition. Based in Cambridge, MA, Bill was the VP of Product Marketing for ATG and collaborated on eCommerce strategy with some of the best brands in the world. Welcome Bill!! BY BILL ZUJEWSKI "Times are a changing"...or so the song goes. Not long ago, eCommerce just meant having a cool brand and a slick website. Today, customers expect much more... what I think they really want is Commerce Anywhere... a seamless, consistent and personal way to interact or transact business with you and your products, whether they start on the web, go into a store, talk over the phone, access products via their mobile device or on their favorite social media site. They want one more thing... for you to remember them and their history with you... so they can be treated more intelligently and not have to repeat previous interactions. It makes sense to me, I want it too... it saves me time and money. I work with many companies that are trying to understand how to evolve their business structure and technology solutions to meet the challenges of Commerce Anywhere. My advice... think differently and take a more holistic approach to the customer experience and the cross-channel selling solution. Stop integrating siloed legacy systems and start thinking about a single platform as your new foundation... the e-Commerce platform. I recently wrote a new white paper, Commerce Anywhere - A Business and Technology Strategy to Maximize Cross-Channel Commerce Growth, to help our customers better understand how to create that "Commerce Anywhere" customer experience that customers really want. The paper offers practical insights into an IT transformation that can help you leverage a commerce platform to go beyond the web storefront and instead use it to enable rapid expansion into mobile apps, new in-store apps, and interaction with your customers through social commerce. Let me know what you think by posting a comment on this blog.

    Read the article

  • My program at #MIX10

    - by Laurent Bugnion
    Getting ready to fly to Vegas and MIX10 is really an exciting time! It is also a very busy time, because we are working on a few projects that will be shown on stage, I have my presentation to prepare, and of course as always the book… though these days it has been a bit on the back burner to be honest ;) I arrive in Vegas on Sunday evening around 10PM, so I won’t be able to make it to the traditional IdentityMine dinner this year. I am sure it will be fun nonetheless! My session: Understanding the MVVM pattern http://live.visitmix.com/MIX10/Sessions/EX14 My session is scheduled on the first day, which is awesome, so I am crossing my fingers and hoping that the MIX team doesn’t change it at the last minute… The session will take place on Monday, the 15th of March, 2PM, Room Lagoon F Important: remember that the USA are moving to Summer time on Sunday, so don’t forget to adjust your watches!! Ask the Experts On Monday evening, I will attend the Ask the Experts event, which is taking place between 5Pm and 6:30PM in the main meal hall. This will be a great occasion to grab a beer and talk about code. The Commons MIX has a great place called the Commons, a great location to chill between sessions, and meet tons of interesting people. I love the Commons and plan to spend a lot of time there to meet as many people as I can. Parties I was invited to a few parties, and will do my best to avoid conflicts :) I plan to be at the following events: Silverlight Mixers on Monday evening Insiders MIX party on Tuesday Silverlight partner happy hour on Tuesday too This is a lot of fun, but at the same time we all know that the best value of a conference is to meet people face to face. This is just the right occasion.  And on Thursday… On Thursday I will be attending a Silverlight event at the Luxor. It will be a very busy day, perfect way to end the conference. I fly back home on Friday morning, but due to a long stop in Washington DC (where I intend to go downtown and take pictures… except if the weather is bad, in which case I will probably go to the museum of flight), I will reach home only on Sunday. Getting hold of me The best way to reach me during MIX is to send me a message on Twitter. I will regularly tweet my location at the conference, so make sure to come and meet me. I am eager to make new friends, to talk about the fantastic jobs we did in WPF and Silverlight over the past year and hear your war stories! http://www.twitter.com/lbugnion   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • What should filenames and URLs of images contain for SEO benefit?

    - by Baumr
    We know that good site architecture usually looks like this:
    example-company.com/
    example-company.com/about/
    example-company.com/contact/
    example-company.com/products/
    example-company.com/products/category/
    example-company.com/products/category/productname/
    Now, when it comes to Google Image search, it is clear that the img alt tag, filename/URL, and surrounding text (captions, headings, paragraphs) have an effect on ranking. I want to ask about the filename of the images that we should use (e.g. product-photo.jpg). ...but first about the URL: Often web developers stick all images in a single folder in the root: example-company.com/img/ — and I have stopped doing that. (I don't want to get into it, but basically, it seems more semantic for images which make up part of the content to live at each sub-directory.) However, when all images appear in a single folder, I feel that their filename needs to reflect what they are a bit more than usual, for example: example-company.com/img/example-company-productname-category.jpg It's a longer filename than just product.png, but as long as it's relevant, I see no problem with regards to SEO (unless you're keyword stuffing), and it could even help rank for keywords: "example company" "productname" "category" So no questions there. But what about when we have placed images in the site architecture we outlined at the beginning? In other words, what if image URL paths look like this: example-company.com/products/category/productname/productname.jpg My question is, should the URL be kept short like above and only have the "productname" (and some descriptive keywords) as part of its filename? Or, should it also include the "example-company" and "category"? Like so: example-company.com/products/category/productname/example-company-category-productname.jpg That seems much longer, and redundant when we look at the URL, but here are a few considerations. Images are often downloaded onto computers, and, to the average user, they lose their original URL and thus it isn't clear where they came from. Also, some social networks, forums, and other platforms leave the filename intact when uploaded. (Many others rewrite it, for example, Pinterest and Facebook.) Another consideration: will this really help (even if ever so slightly) rank in Google Image Search, or at least inform Google that the product is something specific to the "example-company"? For example, what if this product can only be bought at this store and is the flagship product? In addition to an abundance of internal links to this product page, would having the "example company" name and "category" help it appear in "example company" searches? In other words, is less more?
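    Purely as an illustration (this snippet is not part of the original question), here is a minimal Python sketch of how the descriptive filenames discussed above might be generated from the company, category and product names. The slug rules, helper names and example values are assumptions for the sketch, not anything the question prescribes:
        import re

        def slugify(text):
            # Lowercase and collapse anything non-alphanumeric into single hyphens
            return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

        def image_filename(company, category, product, ext="jpg"):
            # Builds a keyword-bearing name such as "example-company-category-productname.jpg"
            return f"{slugify(company)}-{slugify(category)}-{slugify(product)}.{ext}"

        print(image_filename("Example Company", "Category", "Product Name"))
        # -> example-company-category-product-name.jpg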

    Read the article

  • How to get tens of millions of pages indexed by Google bot?

    - by Chris Adragna
    We are developing a site that currently has 8 million unique pages, will grow to about 20 million right away, and eventually to about 50 million or more. Before you criticize... Yes, it provides unique, useful content. We continually process raw data from public records and, by doing some data scrubbing, entity rollups, and relationship mapping, we've been able to generate quality content, developing a site that's quite useful and also unique, in part due to the breadth of the data. Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data. I have two questions:
    1. Is the rate of indexing directly correlated to PR, and by that I mean is it correlated enough that purchasing an old domain with good PR will get us to a workable indexing rate (in the neighborhood of 100,000 pages per day)?
    2. Are there any SEO consultants who specialize in aiding the indexing process itself? We're otherwise doing very well with SEO, on-page especially; besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed. Our main competitor has achieved approx 20MM pages indexed in just over one year's time, along with an Alexa 2000-ish ranking.
    Noteworthy qualities we have in place:
    - page download speed is pretty good (250-500 ms)
    - no errors (no 404 or 500 errors when getting spidered)
    - we use Google webmaster tools and log in daily
    - friendly URLs in place
    - I'm afraid to submit sitemaps. Some SEO community postings suggest a new site with millions of pages and no PR is suspicious. There is a Google video of Matt Cutts speaking of a staged on-boarding of large sites, too, in order to avoid increased scrutiny (at approx 2:30 in the video).
    - Clickable site links deliver all pages, no more than four pages deep and typically no more than 250(-ish) internal links on a page.
    - Anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages.
    - We had previously set the crawl rate to the highest on webmaster tools (only about a page every two seconds, max). I recently turned it back to "let Google decide", which is what is advised.

    Read the article

  • ArchBeat Link-o-Rama for 2012-10-11

    - by Bob Rhubart
    Whiteboards, not red carpets. OTN Architect Day Los Angeles. Oct 25. Free event. Yes, it's TinselTown, but the stars at this event are experts in the use of Oracle technologies in today's architectures. This free event includes a full slate of technical sessions and peer interaction covering cloud computing, SOA, and engineered systems—and lunch is on us. Register now. Thursday October 25, 2012, 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048 JDeveloper extensions where? | Peter Paul van de Beek "Where does the downloaded stuff go after you installed JDeveloper extensions, like SOA Composite Editor, Oracle BPM Studio, or AIA Service Constructor?" Peter Paul van de Beek has the answer. Using Apache Derby Database with WebLogic (the express way) | Frank Munz Another technical how-to video from Dr. Frank Munz. Compensation Hello World | Ronald van Luttikhuizen Oracle ACE Director Ronald van Luttikhuizen's post addresses several question that came up during the "Effective Fault Handling in SOA Suite 11g" session that he and fellow Oracle ACE Guido Schmutz presented at Oracle OpenWorld. Oracle Fusion Middleware Security: OAM and OIM 11g Academies Looking for technical how-to content covering Oracle Access Manager and Oracle Identity Manager? The people behind the Oracle Middleware Security blog have indexed relevant blog posts into what they call "Academies." "These indexes," the blog explains, "contain the articles we've written that we believe provide long lasting guidance on OAM and OIM. Posts covered in these series include articles on key aspects of OAM and OIM 11g, best practice architectural guidance, integrations, and customizations." Maximum Availability Whitepaper for IDM 11gR2 | Oracle Fusion Middleware Security The Oracle Fusion Middelware A-Team shares an overview of and a link to a new white paper: "Identity Management 11.1.2 Enterprise Deployment Blueprint." Thought for the Day "The trouble with the world is that the stupid are sure and the intelligent are full of doubt." — Bertrand Russell (May 18, 1872 – February 2, 1970) Source: SoftwareQuotes.com

    Read the article

  • Stop Saying "Multi-Channel!"

    - by David Dorf
    I keep hearing the term "multi-channel" in our industry, but it's time to move on. It kinda reminds me of the term "ECR" or electronic cash register. Long ago ECR was a leading-edge term, but nowadays it's rarely used because it's table-stakes. After all, what cash register today isn't electronic? The same logic applies to multi-channel, at least when we're talking about tier-1 and tier-2 retailers. If you're still talking about multi-channel retailing, you're in big trouble. Some have switched over to the term "cross-channel," and that's a step in the right direction but still falls short. It's kinda like saying, "I upgraded my ECR to accept debit cards!" Yawn. Who hasn't? Today's retailers need to focus on omni-channel, which I first heard from my friends over at RSR but was originally coined at IDC. First, retailers added e-commerce to their store and catalog channels, yielding multi-channel retailing. Consumers could use the channel that worked best for them. Then some consumers wanted to combine channels with features like buy-on-the-Web, pickup-in-the-store. Thus began the cross-channel initiatives to break down the silos and enable the channels to communicate with each other. But the multi-channel architecture is full of duplication that thwarts efforts to provide a consistent experience. Each channel has its own cart, its own pricing, and often its own CRM. This was an outcrop of trying to bring the independent channels to market quickly. Rather than reusing and rebuilding existing components to meet the new demands, silos were created that continue to exist today. Today's consumers want omni-channel retailing. They want to interact with brands in a consistent manner that is channel transparent, yet optimized for that particular interaction. The diagram below, from the soon-to-be-released NRF Mobile Blueprint v2, shows this progression. For retailers to provide an omni-channel experience, there needs to be one logical representation of products, prices, promotions, and customers across all channels. The only thing that varies is the presentation of the content based on the delivery mechanism (e.g. shelf labels, mobile phone, web site, print, etc.), and often these mechanisms can be combined in various ways. I'm looking forward to the day in which I can use my phone to scan QR-codes in a catalog to create a shopping cart of items. Then do some further research on the retailer's Web site and be told about related items that might interest me. Be able to easily solicit opinions and reviews from social sites, and finally enter the store to pick up my items, knowing that any applicable coupons have been applied. In this scenario, I, the consumer, am dealing with a single brand that is aware of me and my needs throughout the entire transaction. Nirvana.

    Read the article

  • What is the correct way to use g_signal_connect() in C++ for dynamic unity quicklists?

    - by hakermania
    I want to make my application use dynamic Unity quicklists. For building my application I am using C++ and the QtCreator IDE. When a menu action is triggered I want to be able to access a non-static function of my MainWindow class, so as to be able to update the graphical user interface, which can be accessed from inside 'normal' MainWindow functions. So, I am building up my quicklist like this (mainwindow.cpp):
        void MainWindow::enable_unity_quicklist(){
            Unity_Menu = dbusmenu_menuitem_new();
            dbusmenu_menuitem_property_set_bool (Unity_Menu, DBUSMENU_MENUITEM_PROP_VISIBLE, FALSE);
            Unity_Stop = dbusmenu_menuitem_new();
            dbusmenu_menuitem_property_set(Unity_Stop, DBUSMENU_MENUITEM_PROP_LABEL, "Stop");
            dbusmenu_menuitem_child_append (Unity_Menu, Unity_Stop);
            g_signal_connect (Unity_Stop, DBUSMENU_MENUITEM_SIGNAL_ITEM_ACTIVATED, G_CALLBACK(&fake_callback), (gpointer)this);
            if(!unity_entry)
                unity_entry = unity_launcher_entry_get_for_desktop_id("myapp.desktop");
            unity_launcher_entry_set_quicklist(unity_entry, Unity_Menu);
            dbusmenu_menuitem_property_set_bool(Unity_Menu, DBUSMENU_MENUITEM_PROP_VISIBLE, true);
            dbusmenu_menuitem_property_set_bool(Unity_Stop, DBUSMENU_MENUITEM_PROP_VISIBLE, true);
        }

        void MainWindow::fake_callback(gpointer data){
            MainWindow* m = (MainWindow*)data;
            m->on_stopButton_clicked();
        }

        void MainWindow::on_stopButton_clicked(){
            //stopping the process...
        }
    mainwindow.h:
        private slots:
            void enable_unity_quicklist();
            void on_stopButton_clicked();
        public slots:
            static void fake_callback(gpointer data);
    This suggestion was taken from http://old.nabble.com/Using-g_signal_connect-in-class-td18461823.html The program crashes immediately after I choose the 'Stop' action from the Unity quicklist. Debugging the program shows that I am not able to access anything MainWindow-related inside on_stopButton_clicked() without crashing. For example, it crashes when doing this check (which is the first two lines of code inside this function):
        if (!ui->stopButton->isEnabled())
            return;
    I have also tested lots of other things that I found on the internet, but none of them worked. One interesting solution would be to use gtkmm (http://developer.gnome.org/gtkmm-tutorial/stable/sec-connecting-signal-handlers.html.en), but I am not at all used to working on GTK applications (I work solely in Qt) and I don't know if this even suits my case. A compilable example indicating what the problem is can be found at: http://ubuntuone.com/7iKA3wnPmWVp8YNNDLlVQI (3.2Kb) If you are not familiar with the QtCreator IDE, you can compile with the following commands, as long as you have all the needed libraries:
        cd dynamic_unity_quicklists_test; qmake -project; qmake; make

    Read the article

  • How can a solo programmer become a good team player?

    - by Nick
    I've been programming (obsessively) since I was 12. I am fairly knowledgeable across the spectrum of languages out there, from assembly, to C++, to Javascript, to Haskell, Lisp, and Qi. But all of my projects have been by myself. I got my degree in chemical engineering, not CS or computer engineering, but for the first time this fall I'll be working on a large programming project with other people, and I have no clue how to prepare. I've been using Windows all of my life, but this project is going to be very unix-y, so I purchased a Mac recently in the hopes of familiarizing myself with the environment. I was fortunate to participate in a hackathon with some friends this past year -- both CS majors -- and excitingly enough, we won. But I realized as I worked with them that their workflow was very different from mine. They used Git for version control. I had never used it at the time, but I've since learned all that I can about it. They also used a lot of frameworks and libraries. I had to learn what Rails was pretty much overnight for the hackathon (on the other hand, they didn't know what lexical scoping or closures were). All of our code worked well, but they didn't understand mine, and I didn't understand theirs. I hear references to things that real programmers do on a daily basis -- unit testing, code reviews, but I only have the vaguest sense of what these are. I normally don't have many bugs in my little projects, so I have never needed a bug tracking system or tests for them. And the last thing is that it takes me a long time to understand other people's code. Variable naming conventions (that vary with each new language) are difficult (__mzkwpSomRidicAbbrev), and I find the loose coupling difficult. That's not to say I don't loosely couple things -- I think I'm quite good at it for my own work, but when I download something like the Linux kernel or the Chromium source code to look at it, I spend hours trying to figure out how all of these oddly named directories and files connect. It's a programming sin to reinvent the wheel, but I often find it's just quicker to write up the functionality myself than to spend hours dissecting some library. Obviously, people who do this for a living don't have these problems, and I'll need to get to that point myself. Question: What are some steps that I can take to begin "integrating" with everyone else? Thanks!

    Read the article

  • Oracle GoldenGate 11gR2 New Feature: Integrated Capture

    - by Doug Reid
    With the release of Oracle GoldenGate 11gR2, the Product Management team is very excited about the addition of Integrated Capture for the Oracle platform. Integrated Capture is unique in the industry and unique to the Oracle database. It is not available on any other database platform. This new feature moves GoldenGate's capture capabilities closer to the Oracle Database engine and is the foundation for Oracle GoldenGate on the Oracle Database platform over the long term. It is important to note that Integrated Capture does not replace our classic Capture process. Both are available on the Oracle Database platform. The Integrated Capture mechanism relies on Oracle's internal log parsing and processing to capture DML transactions. By moving closer to the Oracle Database engine, Oracle GoldenGate can take advantage of new Oracle Database features and functionality more quickly. For example, this new mechanism allows GoldenGate to support advanced features such as compression. Integrated Capture provides support for all flavors of Oracle compression, including hybrid columnar compression (EHCC) on Exadata, whereas our "classic" capture would not. Integrated Capture supports two different deployment configurations: on-source and downstream. The on-source deployment model is what most customers are familiar with: Oracle GoldenGate executes on the database server, capturing changes in real time. This is the default deployment method. The other option is downstream, where the source database and the Oracle GoldenGate Capture process are on different machines. This method effectively off-loads the processing requirements to a second machine. Customers may choose which option they prefer based on their requirements. Additional information on Integrated Capture can be found in our documentation and the white paper "Oracle GoldenGate for Oracle".

    Read the article

  • WiFi connected to router, but no internet connection

    - by Quetzacotl
    I just got a new notebook, a ThinkPad Edge E530, and installed Ubuntu on it. I'm pretty new to Ubuntu. On the same laptop, running Windows 7, the Wi-Fi connection works fine. Ethernet connection works both on Win7 and on Ubuntu. Only Wi-Fi on Ubuntu does not work; it connects to the Wi-Fi access point but I don't have Internet access. My wireless card is Intel Centrino Wireless-N 2230. What can fix the problem?
    EDIT:
    ifconfig -a
        eth0      Link encap:Ethernet  HWaddr b8:88:e3:30:72:34
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:43 Base address:0x8000
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:163 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:163 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:10124 (10.1 KB)  TX bytes:10124 (10.1 KB)
        usb0      Link encap:Ethernet  HWaddr 02:15:e0:ec:01:00
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
        wlan0     Link encap:Ethernet  HWaddr 68:5d:43:43:71:e1
                  inet addr:192.168.2.101  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::6a5d:43ff:fe43:71e1/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:40 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:220 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:2801 (2.8 KB)  TX bytes:26230 (26.2 KB)
    route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.2.1     0.0.0.0         UG    0      0        0 wlan0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 wlan0
        192.168.2.0     0.0.0.0         255.255.255.0   U     2      0        0 wlan0
    cat /etc/resolv.conf
        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 127.0.0.1
    iwconfig
        lo        no wireless extensions.
        usb0      no wireless extensions.
        wlan0     IEEE 802.11bgn  ESSID:"SATELITE"
                  Mode:Managed  Frequency:2.462 GHz  Access Point: 00:1F:1F:8D:CC:08
                  Bit Rate=1 Mb/s   Tx-Power=16 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=60/70  Signal level=-50 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:93  Invalid misc:243   Missed beacon:0
        eth0      no wireless extensions.

    Read the article

  • State / Screen management in Entity Component Systems

    - by David Lively
    My entity/component system is happily humming along and, despite some performance concerns I initially had, everything is working fine. However, I've realized that I missed a crucial point when starting this thing: how do you handle different screens? At the moment, I have a GameManager class which owns a component manager and entity manager. When I create an entity, the entity manager assigns it an ID and makes sure it's tracked. When I modify the components that are assigned to an entity, an UpdateEntity method is called, which alerts each of the systems that they may need to add or remove the entity from their respective entity lists. A problem with this is that the collection of entities operated on by each system is determined solely by the individual Systems, typically based on a "required component" filter. (An entity has to have a Renderable component to be rendered, for instance.) In this situation, I can't just keep collections of entities per screen and only Update/Draw those collections. They'd either have to be added and removed depending on their applicability to the current screen, which would cause their associated components to be removed, or be enabled/disabled in a group per screen to hide what's not supposed to be visible. These approaches seem like really, really crappy kludges. What's a good way to handle this? A pretty straightforward way that comes to mind is to create a separate GameManager (which in my implementation owns all of the systems, entities, etc.) per screen, which means that everything outside of the device context would be duplicated. That's bothersome because some things are always visible, or I might want to continue to display the game under a translucent menu window. Another option would be to add a "layer" key to the GameManager class, which could be checked against a displayable layer stack held by the game manager. *System.Draw() would be called for each active layer, in the required order as determined by the stack. When the systems request an iterator for their respective entity collections, it would be pre-filtered to a (cached) set of those entities that participate in the active layer. Those collections could be updated from the same UpdateEntity event that's already used to maintain each system's entity collections. Still, it kinda feels like a hack. If I've coded myself into a corner, feel free to throw tomatoes as long as they're labeled with a helpful suggestion. Hooray for learning curves.
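    As a rough illustration of the "layer" idea sketched above (not code from the original post, and in Python purely for brevity - the names and structure are assumptions): entities are tagged with a layer, the game manager keeps a stack of active layers, and each System's cached entity collection is pre-filtered against that stack before Update/Draw.
        class LayerFilter:
            """Entities belong to a layer; the game manager holds a stack of
            active layers; a System's entity list is filtered against it."""

            def __init__(self):
                self.entity_layers = {}   # entity id -> layer name
                self.layer_stack = []     # active layers, e.g. ["game", "pause_menu"]

            def assign(self, entity_id, layer):
                self.entity_layers[entity_id] = layer

            def active_entities(self, system_entities):
                # Keep only entities whose layer is currently on the stack
                active = set(self.layer_stack)
                return [e for e in system_entities if self.entity_layers.get(e) in active]

        # A render system might then iterate only the filtered subset, in stack order:
        # for entity in layer_filter.active_entities(render_system_entities): draw(entity)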

    Read the article

  • Test your internet connection - Emtel Mobile Internet

    After yesterday's report on Emtel Fixed Broadband (I'm still wondering where the 'fixed' part is), I did the same tests on Emtel Mobile Internet. For this I'm using the Huawei E169G HSDPA USB stick, connected to the same machine. Actually, this is my fail-safe internet connection and the system automatically switches between them if a problem, let's say timeout, etc. has been detected on the main line. For better comparison I used exactly the same servers on Speedtest.net. The results Following are the results of Rose Hill (hosted by Emtel) and respectively Frankfurt, Germany (hosted by Vodafone DE): Speedtest.net result of 31.05.2013 between Flic en Flac and Rose Hill, Mauritius (Emtel - Mobile Internet) Speedtest.net result of 31.05.2013 between Flic en Flac and Frankfurt, Germany (Emtel - Mobile Internet) As you might easily see, there is a big difference in speed between national and international connections. More interestingly are the results related to the download and upload ratio. I'm not sure whether connections over Emtel Mobile Internet are asymmetric or symmetric like the Fixed Broadband. Might be interesting to find out. The first test result actually might give us a clue that the connection could be asymmetric with a ratio of 3:1 but again I'm not sure. I'll find out and post an update on this. It depends on network coverage Later today I was on tour with my tablet, a Samsung Galaxy Tab 10.1 (model GT-P7500) running on Android 4.0.4 (Ice Cream Sandwich), and did some more tests using the Speedtest.net app. The results are actually as expected and in areas with better network coverage you will get better results after all. At least, as long as you stay inside the national networks. For anything abroad, it doesn't really matter. But see for yourselves: Speedtest.net result of 31.05.2013 between Cascavelle and servers in Rose Hill, Mauritius (Emtel - Mobile Internet), Port Louis, Mauritius and Kuala Lumpur, Malaysia It's rather shocking and frustrating to see how the speed on international destinations goes down. And the full capability of the tablet's integrated modem (HSDPA: 21 Mbps; HSUPA: 5.76 Mbps) isn't used, too. I guess, this demands more tests in other areas of the island, like Ebene, Pailles or Port Louis. I'll keep you updated... The question remains: Alternatives? After the publication of the test results on Fixed Broadband I had some exchange with others on Facebook. Sadly, it seems that there are really no alternatives to what Emtel is offering at the moment. There are the various internet packages by Mauritius Telecom feat. Orange, like ADSL, MyT and Mobile Internet, and there is Bharat Telecom with their Bees offer which is currently limited to Ebene and parts of Quatre Bornes.

    Read the article

  • Three Master Data Management Deployment Tips

    - by david.butler(at)oracle.com
    MDM is all about data quality and data governance. We now know that improved data quality raises all operational and analytical boats. But it's not just about deploying data quality tools. It's about deploying data quality tools within and across the IT landscape - from a thousand points of data entry to a single version of the truth. Here are three tips to deploying MDM across your applications and enterprise.
    #1: Identify a tactical, high-value business problem where MDM can materially help.
    - Support a customer acquisition and retention program with a 'customer' master data solution.
    - Accelerate new products and services to market with a 'product' master data solution.
    - Reduce supplier exceptions or support spend control initiatives with a 'supplier' master data solution.
    - Support new store (branch, campus, restaurant, hospital, office, well head) location analysis with a 'site' master data solution.
    - Fix long standing Chart of Accounts and Cost Center problems with a 'financial' master data solution.
    - Support M&A activity, application upgrades, an SOA initiative, a cloud computing program, or a new business intelligence deployment by implementing a mix of master data solutions.
    #2: Incrementally expand to a full information architecture. Quite often, the measurable return on investment from tactical MDM initiatives will fund future deployments. Over time, the MDM solution expands into its full architecture to cover the entire IT landscape. Operations and analytics are united, IT flexibility is restored, and sustainable competitive advantage is achieved.
    #3: Bring business into every MDM deployment. To be successful, MDM must work hand in hand with data governance. In fact, Oracle MDM incorporates data governance tools for business users. IT can ensure data quality, but only after the business side has defined what quality means. The business establishes the rules for governing the master data, and then IT enforces the rules via the MDM applications. Without this business/IT collaboration, MDM initiatives seldom achieve their full potential.
    It is not very often that a technology comes along that can measurably assist organizations across a wide variety of top IT initiatives. Reducing costs, increasing flexibility, getting more out of existing assets, and aligning business and IT are not easy tasks for any CIO. But with MDM, success is achievable. IT can regain its place as a center for innovation.
    For more information on this topic, take a look at my article Master Data Management Deployment Tips in the Opinion Section of Oracle's Profit Online magazine.

    Read the article

  • #SSAS #Tabular Workshop and Community Events in Netherlands and Denmark

    - by Marco Russo (SQLBI)
    Next week I will finally start the roadshow of the SSAS Tabular Workshop, a 2-day seminar about the new BISM Tabular model for Analysis Services that has been introduced in SQL Server 2012. During these roadshows, we always try to arrange some speeches at local community events in the evening - we have already defined one for Copenhagen, and we have a logistic issue in Amsterdam that we're trying to solve. Here is the timetable:
    Netherlands
    - SSAS Workshop in Amsterdam, NL – April 16-17, 2012: 2-day seminar; Alberto and I will be the trainers for this event; register here
    - We're trying to arrange a community event but we still don't have a confirmation, stay tuned
    Denmark
    - SSAS Workshop in Copenhagen, DK – April 26-27, 2012: 2-day seminar; Alberto and I will be the trainers for this event; register here
    - Community event on April 26, 2012. This event will run in Hellerup, at the Microsoft venue. All details available here: http://msbip.dk/events/26/msbip-mode-nr-5/ People from Sweden are welcome! Just register to this private group on LinkedIn in order to announce your presence, so we'll know how many people will attend
    In the community events we'll deliver two speeches – here are the descriptions:
    Inside xVelocity (VertiPaq): PowerPivot and BISM Tabular models in Analysis Services share a great columnar-based database engine called the xVelocity in-memory analytics engine (VertiPaq). If you want to improve performance and optimize memory used, you have to understand some basic principles about how this engine works, how data is compressed, and how you can design a data model for better optimization. Prepare yourself to change your mind. xVelocity optimization techniques might seem counterintuitive and are absolutely different than OLAP and SQL ones!
    Choosing between Tabular and Multidimensional: You have a new project and you have to make an important decision upfront. Should you use Tabular or Multidimensional? It is not easy to answer, because sometimes there is a clear choice, but most of the time both decisions might be correct, at least at the beginning. In this session we'll help you make an informed decision, correctly evaluating the pros and cons of each one according to common scenarios, considering both short-term and long-term consequences of your choice.
    I hope to meet many people on these first dates. We have many other events coming in May and June, including an online event (for US time zones), and you can also attend our PreCon Day at TechEd US in Orlando (PRC06) or TechEd Europe in Amsterdam. I'll be a good customer for airline companies in the next three months! I'm just sorry that I haven't had time to write other articles in the last month, but I'm accumulating material that I will need to write down during some flight – stay tuned…

    Read the article

  • How to wrap console utils in webserver

    - by Alex Brown
    I have a big dataset (100MBs/day) and a bunch of console TCL/TK tools to view it - I want to turn it into a web app that I can build, and others can maintain. In long: my group runs simulations yielding 100s of MBs of data daily, in multiple (mostly but not only) text forms. We have a bunch of scripts and tools, mostly old-school 1990's-style stuff requiring a 5-button mouse, as well as lots of ad-hoc scripts that engineers build out of frustration every month or so. These produce UIs, graphs, spreadsheets (various sizes), logs, event histories etc. I want to replace (or at least supplement) the xwindows / console style UI with a web-based one, so I need the following properties:
    - pleasant to program
    - can wrap existing command-line tools in separate views (I don't need to scrape GUIs or anything)
    - as I port logic from the existing scripts I can create a modularised and pleasant codebase to replace it
    - I can attach a web UI to navigate between views - each view is likely to contain keys which might make sense to view in another
    I am new to building systems that have logic on both the back-end and front-end of a web server. From that point of view, they do this:
    - backend: wraps old-school executables, constructs calls into them, then takes the output, wraps it up, niceifies it and delivers it to the web client. For instance the tool might generate a number of indexed images (per invocation) which I might deliver all at once or on-demand. It may (probably) need to do heavy stats on some sources.
    - frontend: provides navigation connecting multiple views, performs requests from one view for data from another (or self to self), etc. It will probably have some views with a lot of interactivity.
    Can people please point me towards viable solutions for this? I know it's a bit of an open question, so as answers come in I hope to refine the spec until we have a good match. I guess I expect to see answers like "RoR!" "beans!" "Scala!" but please give an indication of why those are a good fit; I know nothing! I got bumped off SO for asking an open-ended question, so sorry if it's OT here too (let me know). I take the policy that I use the best/closest matched language for a project, but most of my team are extremely low level (ie pipeline stages and CDyn) so I don't have the peer group to know where to start. A rough sketch of the backend-wrapping idea follows below.
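    To make the "backend wraps old-school executables" idea concrete, here is a minimal sketch (not from the original question) of exposing one console tool behind an HTTP endpoint using only the Python standard library. The tool name "legacy_report", its --run flag and the query parameter are hypothetical placeholders standing in for whichever existing script is being wrapped:
        import subprocess
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import urlparse, parse_qs

        class ToolHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # e.g. GET /report?run=42 -> invoke the legacy console tool, return its output
                query = parse_qs(urlparse(self.path).query)
                run_id = query.get("run", ["latest"])[0]
                result = subprocess.run(["legacy_report", "--run", run_id],
                                        capture_output=True, text=True, timeout=60)
                self.send_response(200)
                self.send_header("Content-Type", "text/plain; charset=utf-8")
                self.end_headers()
                self.wfile.write(result.stdout.encode("utf-8"))

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8080), ToolHandler).serve_forever()
    A full framework (Rails, Django, etc.) would replace the hand-rolled handler, but the shape stays the same: parse the request, build the command line, run the tool, and wrap its output for the browser.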

    Read the article

  • Oracle is Sponsoring LinuxCon Europe 2012

    - by Zeynep Koch
    Architecture is amazing in Barcelona but you will also be impressed with Oracle Linux sessions in LinuxCon Europe as well.  Oracle is one of the key sponsors in LinuxCon Europe and we have great sessions to show you why Oracle Linux is best for your "IT architecture"! We also have a booth where you can pick up latest Oracle Linux and Oracle VM DVD Kit and Virtualization for Dummies booklet. Don't forget to visit us at technology showcase Booth #19. Oracle Sessions at LinuxCon Europe 2012:  1. OCFS2: Status and Overview - Lenz Grimmer, Oracle Wednesday November 7, 2012 10:40am - 11:25am Venue: Diamant OCFS2, Oracle's general-purpose shared-disk cluster file system for Linux has come a long way since its development started in 2003. Distributed under the GPL and part of the mainline Linux Kernel, it is also included in Oracle Linux and plays a vital role in products like Oracle VM, Oracle RAC or E-Business Suite. This presentation will provide a general technical overview as well as an update on the latest developments. Attendees will learn about the features and improvements that set OCFS2 apart from other Linux-based cluster file systems, including: Heartbeat implementation: global vs. local heartbeats Storage optimizations: Extent-based Allocations, Hole punching, Reflinks 2. Status of Linux Tracing - Elena Zannoni, Oracle Wednesday November 7, 2012 11:35am - 12:20am Venue: Diamant There have been many developments recently in the Linux tracing area. The tracing infrastructure in the kernel is getting more robust, with  the recent introduction of uprobes to allow the implementation of user  space tracing, and new features of perf. There are many tracing tools to choose from, including the newest kid on the block, DTrace for Linux.  This talk will take the audience through the main tracing facilities  available today whether more tightly integrated with the kernel code, or maintained stand alone. 3. MySQL Security Model and Pluggable Authentication - Kristofer Pettersson, Oracle Wednesday November 7, 2012 1:50pm - 2:35pm Venue: Diamant With an increasing security awareness among web and cloud developers, knowing how to secure your database from unauthorized or malicious access has become important. This talk explains the MySQL security model, pluggable authentication, new auditing features and rounds off with some pointers on how to securely integrate your database into your Linux web stack. We look forward to seeing you in Barcelona, Spain on November 5-9, 2012. Register today 

    Read the article

  • Sustainability Activities at Oracle OpenWorld

    - by Evelyn Neumayr
    Close to 50,000 participants will come to San Francisco for Oracle OpenWorld and JavaOne events, held September 30-October 4, 2012 at Moscone Center. Oracle is very conscious of the impact that these events have on the environment and, as part of its ongoing commitment to sustainability, has developed a sustainable event program-now in its fifth year-that aims to maximize positive benefits and minimize negative impacts in a variety of ways. Click here for more details. At the Oracle OpenWorld conference, there will be many sessions and even a hands-on lab which discuss the sustainability solutions that Oracle provides for our customers. I wanted to highlight a few of those sessions here so if you will be at Oracle OpenWorld, you can make sure to attend them. One of the most compelling sessions promises to be our “Eco-Enterprise Innovation Awards and the Business Case for Sustainability” session on Wednesday, October 3 from 10:15 a.m. to 11:15 a.m. in Moscone West 3005. Oracle Chairman of the Board Jeff Henley, Chief Sustainability Officer Jon Chorley, and other Oracle executives will honor select customers with Oracle's Eco-Enterprise Innovation award. This award recognizes customers and their respective partners who rely on Oracle products to support their green business practices in order to reduce their environmental impact, while improving business efficiencies and reducing costs. Another interesting session is the “Tracking, Reporting, and Reducing Environmental Impact with Oracle Solutions” which occurs on Monday, October 1 from 4:45 p.m. to 5:45 p.m. in Moscone West Room 2022. This session covers Oracle’s overall sustainability strategy as well as Oracle Environmental Accounting and Reporting (EA&R), which leverages Oracle ERP and BI solutions for accurate, efficient tracking of energy, emissions, and other environmental data. If you want more details, make sure to visit the hands-on lab titled “Oracle Environmental Accounting & Reporting for Integrated Sustainability Reporting”. This hour-long lab will take place on Tuesday, October 2 at 5:00 p.m. in the Marriott Marquis Hotel-Nob Hill CD. Here you can learn how to use Oracle EA&R to collect sustainability-related data in an efficient and reliable manner as part of existing business processes in Oracle E-Business Suite or JD Edwards Enterprise One. Register for this hands-on lab here.  

    Read the article

  • Default values - are they good or evil?

    - by Andrew
    The question is about default values in general - default function return values, default parameter values, default logic for when something is missing, default logic for handling exceptions, default logic for handling edge conditions, etc. For a long time I considered default values to be a "pure evil" thing, something that "cloaks the catastrophe" and results in very hard to find bugs. But recently I started to think about default values as some sort of technical debt... which is not an outright bad thing, but something that could provide some "short term financing" to get us to survive the project (how many of us could afford to buy a house without taking out a mortgage?). When I say "short term" - I don't mean "do something quickly first and refactor it out later, before it hits production". No - I am talking about relying on hardcoded default values in production software. Granted - it could cause some issues, but what if it is only going to cause a single problem in a whole year? Again - I am talking about the "average" mainstream software here (not software for a nuclear power station) - the average web site or a UI application for accounting software, meaning that people's lives are not at stake, nor millions of dollars. Again, from my experience, business users would rather live with software which "works somehow" than wait for a perfect one. And the use of default values helps a lot if you develop software in a RAD style. But again - the longest debug sessions I have spent were because of bugs introduced by a default value which either stopped being "a default" along the way, or because a small subsystem had recently been upgraded and, as a result of this upgrade, it did not handle the default correctly (e.g. empty list vs null, or null string vs empty string). So my question is - are default values good or evil? And if they are a technical debt - how do you measure how much you can borrow so you can afford the repayments? Would really appreciate any input. Cheers. EDIT: If I am using default values as a way to cut corners during development - and if the corner-cutting results in bugs and issues - what is the methodology to recover from these issues?
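    To make the "cloaks the catastrophe" point concrete, here is a small illustrative Python example (mine, not the poster's; the names and values are made up). The silent default keeps the program "working somehow" while hiding the bug, whereas the strict lookup fails fast at the call site:
        PRICES = {"widget": 9.99, "gadget": 24.50}

        def get_price(product):
            # Silent default: a typo in the product name quietly becomes a free item
            return PRICES.get(product, 0.0)

        def get_price_strict(product):
            # Fail-fast variant: the same typo raises immediately
            return PRICES[product]

        print(get_price("widgit"))            # 0.0 -- the bug is cloaked
        try:
            print(get_price_strict("widgit"))
        except KeyError as err:
            print("typo caught immediately:", err)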

    Read the article

  • Outlying DBAs

    - by steveh99999
    I read an interesting book recently, ‘Outliers: The Story of Success’ by Malcolm Gladwell. There’s a good synopsis of the book on Wikipedia. I don’t want to write a detailed review here, but it’s well worth a read. There were a couple of sections which I thought were relevant to IT professionals, and to DBAs in particular.

    Firstly, ‘the 10,000-hour rule’. In this section Gladwell asserts that becoming a real ‘elite performer’ takes 10,000 hours of practice: ‘Practice isn’t the thing you do once you’re good, it’s the thing you do that makes you good’. He gives many interesting examples (the Beatles, Bill Gates, etc.), but I was wondering: could this be applied to DBAs? If it takes 10,000 hours to become a really elite DBA, how long is that in practice? At 8 hours a day, 10,000 hours is 1,250 days. If we assume that most DBAs work around 230 days a year, then it takes around five and a half years to become an elite DBA. But how much time per day does a DBA spend actually doing DBA work? Certainly it’s my experience that the more experienced I get as a DBA, the less time I seem to spend doing real DBA work; instead it’s meetings, change-control meetings, project planning, liaising with other teams, appraisals and so on. Is it more accurate to assume that a DBA spends half their time actually doing ‘real’ DBA work, or is that just my bad luck? So, in reality, I’d argue it can take at least five and a half years, and more likely closer to ten, to become an elite DBA. Why, then, do I keep receiving CVs for senior DBAs with only 2-4 years of actual DBA experience?

    In the second section I found particularly interesting, Gladwell writes about the analysis of plane crashes and the importance of in-cockpit communications. He describes a couple of crashes involving Korean Airlines, where co-pilots were often deferential to pilots and unwilling to openly criticise their more senior colleagues or point out errors when things were going badly wrong. There’s a better summary of Gladwell’s concepts on mitigation elsewhere, but to apply this to a DBA role: if you are a DBA and you do not agree with a decision of one of your superiors, then it’s your duty as a DBA to say what you think is wrong, before it’s too late. Obviously there’s a fine line between constructive criticism and moaning, but a good senior DBA or manager should be able to take well-researched criticism or debate from a more junior DBA. Is this really possible?

    Read the article

  • Sharing on Github

    - by Alan
    Over the past couple of weeks I have gotten a lot of help from StackOverflow users on a project, and rather than keep the finished product to myself I wanted to share it unencumbered by licenses, but I don't want there to be so much legwork during installation that users shy away from trying it. I am about to post it to GitHub and am choosing public domain licensing. I would like it to be super simple for users to make use of: just FTP it up and go. That being said, do I need to make sure I remove things like the jQuery file and other GPL/MIT-licensed dependencies that I didn't write but that my code depends on? I haven't removed any copyright notices from the other code, and all of it is open source; it would just be nice if users could download everything at once, while of course not trying to represent that I am the license holder of the dependencies. Inside my files there are also some snippets; do those have to be externalized with installation instructions, or can they be posted as is? Here is an example: my nav.php file is 115 lines long, and I have this at the top:

    <script type="text/javascript" src="./js/ddaccordion.js">
    /***********************************************
    * Accordion Content script - (c) Dynamic Drive DHTML code library (www.dynamicdrive.com)
    * Visit http://www.dynamicDrive.com for hundreds of DHTML scripts
    * This notice must stay intact for legal use
    ***********************************************/
    </script>
    <link href="css/admin.css" rel="stylesheet">
    <script type="text/javascript">
    ddaccordion.init({
        headerclass: "submenuheader", // Shared CSS class name of headers group
        contentclass: "submenu", // Shared CSS class name of contents group
        revealtype: "click", // Reveal content when user clicks or mouses over the header? Valid values: "click", "clickgo", or "mouseover"
        mouseoverdelay: 200, // if revealtype="mouseover", set delay in milliseconds before header expands on mouseover
        collapseprev: false, // Collapse previous content (so only one open at any time)? true/false
        defaultexpanded: [], // index of content(s) open by default [index1, index2, etc]; [] denotes no content
        onemustopen: false, // Specify whether at least one header should always be open (so never all headers closed)
        animatedefault: false, // Should contents open by default be animated into view?
        persiststate: true, // Persist state of opened contents within browser session?
        toggleclass: ["", ""], // Two CSS classes applied to the header when it's collapsed and expanded, respectively ["class1", "class2"]
        togglehtml: ["suffix", "<img src='./images/plus.gif' class='statusicon' />", "<img src='./images/minus.gif' class='statusicon' />"], // Additional HTML added to the header when it's collapsed and expanded, respectively ["position", "html1", "html2"] (see docs)
        animatespeed: "fast", // Speed of animation: integer in milliseconds (ie: 200), or keywords "fast", "normal", or "slow"
        oninit: function(headers, expandedindices) {
            // custom code to run when headers have initialized - do nothing
        },
        onopenclose: function(header, index, state, isuseractivated) {
            // custom code to run whenever a header is opened or closed - do nothing
        }
    })
    </script>

    Read the article

  • Tales of a corrupt SQL log

    - by guybarrette
    Warning: I’m a simple dev, not an all-powerful DBA with godly powers. This morning, one of my sites was down and DNN reported a problem with the database. A quick series of tests revealed that the culprit was a corrupted log file. Easy fix, I said; I have daily backups, so it’s just a matter of restoring a good copy of the database and log files. Well, I found out that’s not exactly true. You see, for this database I have daily file backups, and these are not database backups created by SQL Server. So I restored a set of files from a couple of days ago, stopped the SQL service, copied the files over the bad ones, and restarted the service, only to find out that SQL doesn’t like it when you do that. It suspects something fishy and marks the database as suspect, and a database marked as suspect can’t be accessed at all. So now what?

    I searched throughout the tubes of the InterWeb and found that you can recover from a corrupted log file by creating a new database with the same name as the defective one, then copying the restored database file (the one with the data) over the newly created one. Sweet! But you still end up with SQL marking the database as suspect, although at least the newly created log is OK. Well, not quite: it’s not corrupted, but the lack of data makes it not OK for SQL, so you need to rebuild the log. How can you do that when SQL blocks any action on the database? First, you need to change the database status from suspect to emergency. Then you need to set the database for single-user access only. After that, you need to repair the log with DBCC and do the DBA dance. If you dance long enough, SQL should repair the log file. Finally, you set the access back to multi-user. Here’s the T-SQL script:

    use master
    GO
    EXEC sp_resetstatus 'MyDatabase'
    ALTER DATABASE MyDatabase SET EMERGENCY
    ALTER DATABASE MyDatabase SET SINGLE_USER
    DBCC CHECKDB('MyDatabase')
    ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DBCC CHECKDB('MyDatabase', REPAIR_ALLOW_DATA_LOSS)
    ALTER DATABASE MyDatabase SET MULTI_USER

    So I guess that it would have been a lot easier to restore a SQL backup. I can’t really say, but the InterWeb seems to say so. Anyway, lessons learned:
    Vive la différence: file backups are different from SQL backups.
    Don’t touch me: SQL doesn’t like it when you restore a file over a corrupted one.
    The more the merrier: you should do both SQL and file backups.
    WTF?: the InterWeb provides you with dozens of ways to deal with the problem, but many are SQL 2000 or SQL 2005 only, many are confusing, and many are written in strange dialects only DBAs understand.
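
    On those last two lessons, a native SQL backup really is the simpler safety net. The following is a minimal, hedged T-SQL sketch of a full backup and its restore; the database name, file path, and options are assumptions chosen purely for illustration and would need to match your own environment.

    -- Hypothetical sketch: take a native full backup in addition to file backups.
    BACKUP DATABASE MyDatabase
    TO DISK = 'D:\Backups\MyDatabase_Full.bak'
    WITH INIT, CHECKSUM;

    -- Restoring that backup later (a single full backup, no log chain assumed).
    RESTORE DATABASE MyDatabase
    FROM DISK = 'D:\Backups\MyDatabase_Full.bak'
    WITH REPLACE, RECOVERY;

    Restoring from a backup like this avoids the suspect-database dance entirely, because SQL Server keeps the data and log files consistent inside the backup set.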

    Read the article
