Search Results

Search found 590 results on 24 pages for 'tony'.


  • Seven questions for Oracle HCM sales lead Tony Kender

    - by user758881
    [Interview originally in Chinese; most of the text has been lost to character-encoding damage. Recoverable content: a seven-question Q&A with Tony Kender, who is responsible for HCM applications sales at Oracle, covering where HCM sits alongside Oracle's other application lines such as ERP, Supply Chain, EPM and CX; the drivers behind customers moving HR and Payroll to cloud HCM; how Oracle HCM Cloud relates to existing JD Edwards, PeopleSoft and Oracle E-Business Suite HCM customers; the Customer 2 Cloud program for moving on-premise Oracle HCM customers to Oracle HCM Cloud; and Larry Ellison's keynote at Oracle HCM World.]

    Read the article

  • Java Spotlight Episode 86: Tony Printezis on Garbage-First (G1)

    - by Roger Brinkley
    Interview with Tony Printezis on the Garbage-First collector (G1). Joining us this week on the Java All Star Developer Panel is Arun Gupta, Java EE Guy. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News: JSR 358: A major revision of the Java Community Process - JCP 3.Next; JAX-RS 2.0 Early Draft - Third Edition.

    Events: June 11-14, Cloud Computing Expo, New York City; June 12, Boulder JUG; June 13, Denver JUG; June 13, Eclipse Juno DemoCamp, Redwood Shores; June 13, JUG Münster; June 14, Java Klassentreffen, Vienna, Austria; June 18-20, QCon, New York City; June 19, CJUG, Chicago; June 20, 1871, Chicago; June 26-28, Jazoon, Zurich, Switzerland; June 27, Houston JUG; July 5, Java Forum, Stuttgart, Germany; July 13-14, IndicThreads, Delhi; July 30-August 1, JVM Language Summit, Santa Clara.

    Feature Interview: Tony Printezis is a Principal Member of Technical Staff at Oracle, based in Burlington, MA. He has been contributing to the Java HotSpot Virtual Machine since 2006. He spends most of his time working on dynamic memory management for the Java platform, concentrating on performance, scalability, responsiveness, parallelism, and visualization of garbage collectors. He obtained a Ph.D. in 2000 and a BSc (Hons) in 1995, both from the University of Glasgow in Scotland. In addition, he is a JavaOne Rock Star, a title awarded for his highly rated JavaOne session on GC.

    Mail Bag.

    What's Cool: JavaOne content selection is complete; notifications done.

    Read the article

  • SQL User Group Events coming - Cambridge, Leeds, Manchester and Edinburgh

    - by tonyrogerson
    Neil Hambly and I are presenting next week in Cambridge: Neil will be showing how to use the tools at hand to determine the current activity on your database servers, and I'll be giving a talk on Disaster Recovery and High Availability and the options available to us. The User Group is growing in size and spread; there is a Southampton event planned for the 9th Dec - keep your eyes peeled for more details - the best place to look is the UK SQL Server User Group LinkedIn area. Want removing from this email list? Then just reply with "remove please" on the subject line.

    Cambridge SQL UG - 25th Nov, Evening Meeting (more info and register): Neil Hambly on determining the current activity of your database servers; a product demo from Red Gate; Tony Rogerson on HA/DR/Scalability (backup/recovery options - clustering, mirroring, log shipping; scaling considerations, etc.).

    Leeds SQL UG - 8th Dec, Evening Meeting (more info and register): Neil Hambly on Indexed Views and Computed Columns for Performance; Tony Rogerson on advanced T-SQL techniques.

    Manchester SQL UG - 9th Dec, Evening Meeting (more info and register): end-of-year wrap-up, networking, drinks, some discussions - more info to follow soon.

    Edinburgh SQL UG - 9th Dec, Evening Meeting (more info and register): Satya Jayanty on "An X Factor for a DBA's Life" and Tony Rogerson on SQL Server internals.

    Many thanks,
    Tony Rogerson, SQL Server MVP
    UK SQL Server User Group - http://sqlserverfaq.com

    Read the article

  • System can't mount root

    - by Tony Parkes
    I'm running Ubuntu 12.10 and have installed NETKIT, as I need it for my university work. It installed fine, and the NETKIT configuration checks out OK, but when I try to launch a virtual machine it appears on screen for about half a second and then vanishes. I did a verbose launch and am told that the system can't mount root; I can, however, launch an xterm. Any help would be greatly appreciated. Tony

    Read the article

  • Tuesday 6th Manchester SQL User Group - Chris Testa-O'Neil (Loading a datawarehouse using SSIS) and

    - by tonyrogerson
    Chris will give a talk on loading a data warehouse using SQL Server Integration Services; Tony Rogerson will give a talk on database design: normalisation/denormalisation and using surrogate keys - practicalities, pitfalls and benefits in Microsoft SQL Server. Registration is essential, which you can do here: http://sqlserverfaq.com?eid=218 . Come and join us for an evening of SQL Server discussion; as well as the two formal sessions by Chris Testa-O'Neil and Tony Rogerson there will be a chance...(read more)

    Read the article

  • UG Session - Service Broker & Indexing

    - by NeilHambly
    SQL Server User Group session in Reading this Wednesday (21st April 2010, 6pm - 10pm). Along with Tony Rogerson MVP, I (Neil Hambly) will be presenting at the forthcoming User Group meeting at the Microsoft Campus, Reading. Tony will be presenting the session he gave at SQLBits VI on thinking in sets, normalisation, surrogate keys and referential integrity - a very insightful and very popular session. I will be continuing my recent London UG presentation on indexed views; this time I will be doing a...(read more)
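
    For readers who haven't met the feature the indexed views session covers, here is a minimal sketch (the table, view and column names are invented for illustration, and Quantity is assumed NOT NULL): materialising an aggregate ahead of query time by indexing a schema-bound view.

    -- The view must be schema-bound, use two-part names, and include
    -- COUNT_BIG(*) when it aggregates with GROUP BY.
    CREATE VIEW dbo.vSalesByProduct
    WITH SCHEMABINDING
    AS
    SELECT  ProductId,
            SUM(Quantity) AS TotalQty,
            COUNT_BIG(*)  AS OrderCount
    FROM    dbo.OrderLines
    GROUP BY ProductId;
    GO

    -- The unique clustered index is what materialises the view; further
    -- nonclustered indexes can then be layered on top of it.
    CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct
        ON dbo.vSalesByProduct (ProductId);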

    Read the article

  • Google I/O 2012 - The Sensitive Side of Android

    Google I/O 2012 - The Sensitive Side of Android, presented by Tony Chan, Ankur Kotwal and Tim Bray. Android has a sensitive side. In this session, we will call out all the Android sensors: accelerometer, gyroscope, light, and more. We'll cover best practices for handling sensor data, with a special focus on balancing battery life and usability. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 56:06.

    Read the article

  • PASS 13 Dispatches: Memory Optimized = On

    - by Tony Davis
    I'm at the PASS Summit in Charlotte for the Day 1 keynote by Quentin Clark, Corporate VP of the data platform group at Microsoft. He's talking about how SQL Server 2014 is "pushing boundaries", and first up is SQL Server 2014's In-Memory OLTP technology (formerly codenamed "Hekaton"). It is a feature that provokes a lot of interest, and for good reason: without any need for application rewrites or hardware updates, it can ensure that an application finds in memory most or all of the data it needs, and it can lead to huge improvements in processing times. A good recent article on Hekaton use cases talks about applications that need a "shock absorber" when spikes, or just a high rate, of incoming workload (including data in ETL scenarios) become the primary bottleneck. For a really deep look at this technology, check out David DeWitt's summit keynote tomorrow (it will be live-streamed). Other than that, to get started I'd recommend Kalen Delaney's whitepaper. She offers a lot of insight into how it works and how to start to define memory-optimized tables and natively compiled stored procedures. These memory-optimized tables use completely optimistic multi-version concurrency control - no waiting on locks! After that, Tom LaRock has compiled a useful set of links to drill deeper, including one to Microsoft's AMR tool to help you gauge which tables might benefit most. Tony.
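
    To make that concrete, here is a minimal sketch of the syntax Kalen Delaney's whitepaper walks through. The names are invented for illustration, and it assumes a SQL Server 2014 database that already has a MEMORY_OPTIMIZED_DATA filegroup:

    -- A durable memory-optimized table (schema and data are persisted).
    CREATE TABLE dbo.ShockAbsorber
    (
        OrderId    INT IDENTITY(1,1) NOT NULL
                   PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
        CustomerId INT       NOT NULL,
        OrderDate  DATETIME2 NOT NULL,
        Amount     MONEY     NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
    GO

    -- A natively compiled procedure to absorb the incoming spike.
    CREATE PROCEDURE dbo.InsertOrder
        @CustomerId INT, @OrderDate DATETIME2, @Amount MONEY
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        INSERT dbo.ShockAbsorber (CustomerId, OrderDate, Amount)
        VALUES (@CustomerId, @OrderDate, @Amount);
    END;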

    Read the article

  • Ubuntu eats itself after I followed updater instruction

    - by Tony Martin
    Updater (I assume) put a no-entry-style alert icon on the panel, which informed me that certain package dependencies were not up to snuff, and upgrades were thereafter only partial. The dialogue advised that I (and this is from noob memory) sudo apt-get install -f. I did this, typed in the confirmation phrase, and watched apt-get systematically remove every component of Linux - both the stuff I had installed and the core Ubuntu packages. I could only assume at this stage that this was in preparation for a fresh install but, of course, I know better now. There's much complaint about Windows, but I've never met with advice from Microsoft tools to wipe out the operating system because of a couple of missing .dlls. So what gives? This was a 64-bit install of 12.04. All that is left is GRUB, pointing to a couple of Windows recovery partitions on the hard drive. I'm tempted, but I have hopes of recovering the data that I had enough misguided faith to trust to the Linux ext4 partition. I've tried pen-driving back into it with a 32-bit ISO, but I'm simply informed that Ubuntu is running in low-graphics mode, and get to watch the dots cycle indefinitely. EDIT: Thanks for the advice vis-à-vis the positive request. I've got onto the machine with a 64-bit stick and can see the file structure left behind by the installation. My first instinct was to run the installer from the stick, but it did not seem to offer a recovery option. My question, then: is there a way to recover the current installation so that, if I reinstall the packages I had, they will pick up the original settings? I'm particularly worried about losing email from Evolution - the rest I could probably lash back together. I would also be interested in how this disaster came about; I see people in the know recommending this same procedure in similar circumstances. Thanks for your attention, Tony Martin

    Read the article

  • Copy wrongs and Copyright

    - by Tony Davis
    Recently, a Chinese blog website copied, wholesale and without permission, a Simple-Talk article on troubleshooting locking and blocking. Our initial reaction was exasperation and anger, tempered slightly by the fact that there was, at the top, a clear link to the original, and to the book from which it was extracted. On the day the copy was posted, our original article saw a 30K spike in visits, so the site clearly has a substantial following! This made us pause for thought. Indeed, we wondered whether it might not be more profitable, and certainly more enjoyable, to notify the offender of similar content and serve a "put up" notice, rather than the usual DMCA "take down". The DMCA request, issued to protect our and our authors' assets, is a necessary but tiresome chore. So often, simple communication and negotiation could have averted the need for it. We are, after all, in the business of presenting knowledge, information and help to the SQL Server community. If only they had asked! Of course, one's attitude changes according to the motivation behind the copying of content. One motivation seems to be pure vanity: they do it to try to enhance their CV, or their company's expertise, by pretending to expertise they don't possess. There is a class of plagiariser, however, that is doing it purely for money, getting advertising revenue by attracting hapless readers to their site. Not content with stealing content, such sites can invest in services that provide 'load-testing' for websites so realistic that even the search engines can be fooled. Stolen content, fake visitors, swindled advertisers. Zero tolerance is really the only way of dealing with plagiarism, and action will only be completely effective once Bing, Google, and the other search engines strike from their listings the rogue sites that refuse to take down plagiarised content. It is, after all, in everyone else's interests. Cheers, Tony.

    Read the article

  • iTunes contact sync issue

    - by Tony
    Last night when I synced my contacts to my iPhone, it wiped 95% of the contacts on my iPhone :( I have tried to re-sync many times, but no joy. I've even exported all my contacts into Windows Contacts (I'm using Vista), then tried to re-sync in iTunes using that. Note: I have the latest iTunes and iPhone updates. Many thanks! Tony.

    Read the article

  • Event on SQL Server 2008 Disk IO and the new Complex Event Processing (StreamInsight) feature in R2

    - by tonyrogerson
    Allan Mitchell and I are doing a double act. Allan is becoming one of the leading guys in the UK on StreamInsight and will give an introduction to this new, exciting technology; on top of that, I'll be talking about SQL Server disk IO - well, "disk" might not be relevant any more, because I'll also be talking about SSDs and Fusion IO - basically the underpinnings: making sure you understand them and get them right, how to monitor, etc. If you've any specific problems or questions, just ping me an email: [email protected]. To register for the event see: http://sqlserverfaq.com/events/217/SQL-Server-and-Disk-IO-File-GroupsFiles-SSDs-FusionIO-InRAM-DBs-Fragmentation-Tony-Rogerson-Complex-Event-Processing-Allan-Mitchell.aspx

    18:15 - SQL Server and Disk IO
    Tony Rogerson, SQL Server MVP (Tony's blog; Tony on Twitter)
    In this session Tony will talk about RAID levels, how SQL Server writes to and reads from disk, the effect SSDs have, and other options for throughput enhancement such as Fusion IO. He will look at the effect fragmentation has and how to minimise its impact, and at the file structure of a database and the benefits that multiple files and file groups bring. We will also touch on database mirroring, the effect that has on throughput, and how to get a feel for the throughput you should expect.

    19:15 - Break

    19:45 - Complex Event Processing (CEP)
    Allan Mitchell, SQL Server MVP (http://sqlis.com/sqlis)
    StreamInsight is Microsoft's first foray into the world of Complex Event Processing (CEP) and Event Stream Processing (ESP). In this session I want to give an introduction to this technology, show how and why it is useful, get us used to some new terminology and, best of all, show just how easy it is to start building your first CEP/ESP application.
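
    As a taste of the file-group groundwork the first session covers, here is a minimal sketch (the database name and file paths are hypothetical) of adding a dedicated file group and spreading it across two files on separate drives:

    -- Hypothetical database and paths, for illustration only.
    ALTER DATABASE SalesDB ADD FILEGROUP UserData;

    ALTER DATABASE SalesDB
    ADD FILE
        (NAME = UserData1, FILENAME = 'E:\SQLData\SalesDB_UserData1.ndf',
         SIZE = 1024MB, FILEGROWTH = 512MB),
        (NAME = UserData2, FILENAME = 'F:\SQLData\SalesDB_UserData2.ndf',
         SIZE = 1024MB, FILEGROWTH = 512MB)
    TO FILEGROUP UserData;

    -- New tables can then be placed on the new file group:
    -- CREATE TABLE dbo.Orders (...) ON UserData;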

    Read the article

  • Silverlight Cream for November 21, 2011 -- #1171

    - by Dave Campbell
    In this Issue: Colin Eberhardt, Sumit Dutta, Morten Nielsen, Jesse Liberty, Jeff Blankenburg (-2-), Brian Noyes, and Tony Champion.

    Above the Fold:
    Silverlight: "PV Basics: Client-side Collections" - Tony Champion
    WP7: "Pushpin Clustering with the Windows Phone 7 Bing Map control" - Colin Eberhardt

    Shoutouts: Michael Palermo's latest Desert Mountain Developers is up; Michael Washington's latest Visual Studio #LightSwitch Daily is up.

    From SilverlightCream.com:

    Pushpin Clustering with the Windows Phone 7 Bing Map control - Colin Eberhardt is back, discussing pushpins for a Bing Maps app on WP7, and provides a utility class that clusters pushpins, allowing you to render thousands of pins in an app... all the explanation and all the code.

    Part 22 - Windows Phone 7 - Tile Push Notification - Part 22 in Sumit Dutta's WP7 series is about Tile Push Notification... a nice tutorial with all the code listed.

    Correctly displaying your current location - Morten Nielsen demonstrates formatting the information from the GPS on your WP7 into something intelligible and useful.

    Spiking the Pomodoro Timer - Jesse Liberty put up a quick-and-dirty version of a Pomodoro timer for WP7.1 to explore the technical challenges of the Full Stack Phase 2 he's cranking up.

    31 Days of Mango | Day #10: Network - Jeff Blankenburg's number 10 in his 31 Days quest through WP7.1 is about the NetworkInformation namespace, which gives you all sorts of info on the user's device network connection availability, type, etc.

    31 Days of Mango | Day #11: Live Tiles - Jeff Blankenburg takes off on Live Tiles for Day 11... a big topic for a one-day post, but he takes off on it... updating and Live Tiles too.

    Working with Prism 4 Part 2: MVVM Basics and Commands - Brian Noyes has part 2 of his Prism/MVVM series up at SilverlightShow... a very nice tutorial on the basics of getting a view and viewmodel up, and setting up an ICommand to launch an Edit View... plus the code to peruse.

    PV Basics: Client-side Collections - Tony Champion is starting a series on Silverlight 5 and PivotViewer... first up is some basics of dealing with the control in SL5, and a discussion of client-side collections... a great, informative tutorial and all the code.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream. Join me @ SilverlightCream | Phoenix Silverlight User Group. Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10.

    Read the article

  • Need to Know

    - by Tony Davis
    Sometimes, I wonder whether writers of documentation, tutorials and articles stop to ask themselves one very important question: does the reader really need to know this? I recently took on the task of writing a concise series of articles about the transaction log: what it is, how it works and why it's important. It was an enjoyable task; rather like peering inside a giant, complex clock mechanism. Initially, one sees only the basic components, which work to guarantee the integrity of database transactions, and to preserve these transactions so that data can be restored to a previous point in time. On closer inspection, one notices all the small, arcane mechanisms that are necessary to make this happen: LSNs, virtual log files, log chains, database checkpoints, and so on. It was engrossing, escapist stuff; what I'd written looked weighty and steeped in mysterious significance. Suddenly, however, I jolted myself back to reality with the awful thought, "does anyone really need to know all this?" The driver of a car needs only to be dimly aware of what goes on under the hood, however exciting the mechanism is to the engineer. Similarly, while everyone who uses SQL Server ought to be aware of the transaction log, its role in guaranteeing the ACID properties, and how to control its growth, the intricate mechanisms ticking away under its clock face are a world away from the daily work of the harassed developer. The DBA needs to know more, such as the correct rituals for ensuring optimal performance and data integrity: setting the appropriate growth characteristics, backup routines, restore procedures, and so on. However, even then, the average DBA only needs to understand enough about the arcane processes to spot problems and react appropriately, or to know how to Google for the best way of dealing with them. The art of technical writing is tied up in intimate knowledge of your audience and what they need to know at any point. It means serving up just enough at each point to help the reader in a practical way, but not overcooking it, or stuffing the reader with information that does them no good. When I think of the books and articles that have helped me the most, they have been full of brief, practical, well-informed guidance, based on experience. This seems far removed from the 900-page "beginner's guides" that one now sees everywhere. The more I write and edit, the more I become convinced that the real art of technical communication lies in knowing what to leave out. In what areas do the SQL Server technical materials suffer from "information overload"? Where else does it seem that concise, practical advice is drowned out by endless discussion of the "clock mechanisms"? Cheers, Tony.

    Read the article

  • So it comes to PASS…

    - by Tony Davis
    How does your company gauge the benefit of attending a technical conference? What's the best change you made as a direct result of attendance? It's time again for the PASS Summit and I, like most people, go with a set of general goals for enhancing technical knowledge: to learn more about PowerShell, to drill into SQL Server performance-tuning techniques, and so on. Most will write up a brief report on the event for the rest of the team. Ideally, however, it will go a bit further than that; each conference should result in a specific improvement to one of your systems, or to the way you do your job. As co-editor of Simple-Talk.com, and responsible for the majority of our SQL books, I find my "high level" goals don't vary much from conference to conference. I'm always on the lookout for good new authors. I target interesting new technologies and tools and try to learn more. I return with a list of actions, new articles to commission, and potential new authors. Three years ago, however, I started setting myself the goal of implementing "one new thing" after each conference. After one, I adopted Kanban for managing my workload, a technique that places strict limits on "work in progress" and makes the overall workload, and backlog, highly visible. After another, I trialled a community book project. At PASS 2010, one of my general goals was to delve deeper into SQL Server transaction log mechanics but, on top of that, I set a specific goal of writing something useful on the topic. I started a Stairway series and, ultimately, it's turned into a book! If you're attending the PASS Summit this year, take some time to consider what specific improvement or change you'll implement as a result. Also, try to drop by the Red Gate booth (#101). During the Vendor event on Wednesday evening, Gail Shaw and I will be there to discuss, and hand out copies of, the book. Cheers, Tony.

    Read the article

  • Emoti-phrases

    - by Tony Davis
    Surely the next radical step in the development of user-interface design is for applications to react appropriately to the rising tide of bewilderment, frustration or antipathy of their users. When an application understands that rapport is lost, it should respond accordingly. When we, for example, become confused by an unforgiving interface, shouldn't there be a way of signalling our bewilderment and having the application respond appropriately? There is surprisingly little in the current interface standards that would help. If we're getting frustrated with an unresponsive application, perhaps we could let it know of our increasing irritation by means of an "I'm getting angry and exasperated" slider. Although, by 'responding appropriately', I don't include playing a "we are experiencing unusually heavy traffic: your application usage is important to us" message, accompanied by calming muzak. When confronted with a tide of wizards, 'are you sure?' messages, or page after page of tiresome and barely relevant options, how one yearns for a handy 'JFDI' (Just Flaming Do It) button: one click, and the application miraculously desists with its annoying questions and just gets on with the job, using the defaults, or whatever we selected last time. Much more satisfying, and more direct for most developers and DBAs, however, would be the facility to communicate with the application via a twitter-style input field, or via parameters to command-line applications ("I don't want a wide-ranging debate with you; just open the bl**dy PDF!", or "Don't forget which of us has the close button"). To avoid too much cultural dependence, perhaps we should take the lead from emoticons and use a set of standardized emoti-phrases such as 'sez you', 'huh?', 'Pshaw!', or 'meh', which could be used to vent a range of feelings in any given application, whether it be SQL Server stubbornly refusing to give us the result we are expecting, or an online survey that is getting too personal. Or a 'Lingua Glaswegia' perhaps: 'Atsabelter' ("very good"); 'Atspish' ("must try harder"); 'AnThenYerArsFellAff' ("I don't quite trust these results"); 'BileYerHeid' or 'ShutYerGub' ("please stop these inane questions"). There would, of course, have to be an ANSI standards body to define the phrases that were acceptable. Presumably, there would be a tussle amongst the different international standards organizations. Meanwhile Oracle, Microsoft and Apple would each release non-standard extensions. Time then, surely, to plant emoti-phrases on the lot of them and develop a user-driven consensus. Send us your suggestions! The best one will win an iPod nano! Cheers! Tony.

    Read the article

  • On Writing Blogs

    - by Tony Davis
    Why are so many blogs about IT so difficult to read? Over at SQLServerCentral.com, we do a special subscription-only newsletter called Database Weekly. Every other week, it is my turn to look through all the blogs, news and events that might be of relevance to people working with databases. We provide the title, with the link, and a short abstract of what you can expect to read. It is a popular service, with close to a million subscribers. You might think that this is a happy and fascinating task. Sometimes, yes. If a blog comes to the point quickly, and says something both interesting and original, then it has our immediate attention. If it backs up what it says with supporting material, then it is more-or-less home and dry, featured in DBW's list. If it also takes trouble over the formatting and presentation, maybe with an illustration or two and any code well-formatted, then we are agog with joy and it is marked as a must-visit destination in our blog roll. More often, however, a task that should be fun becomes a routine chore, and the effort of trawling so many badly written blogs is enough to make any conscientious Health & Safety officer whistle through their teeth at the risk to the editor's spiritual and psychological well-being. And yet, frustratingly, most blogs could be improved very easily. There is, I believe, a simple formula for a successful blog. First, choose a single topic that is reasonably fresh and interesting. Second, get to the point quickly; explain in the first paragraph exactly what the blog is about, and then stay on topic. In writing the first paragraph, you must picture yourself as a pilot, hearing the smooth roar of the engines as your plane gracefully takes to the air. Too often, however, the accompanying sound is that of the engine stuttering before the plane veers off the runway into a field, and a wheel falls off. The author meanders around the topic without getting to the point, and takes frequent off-radar diversions to talk about themselves, or the weather, or which friends have recently tagged them. This might work if you're J.D. Salinger, or James Joyce, but it doesn't help a technical blog. Sometimes, the writing is so convoluted that we are entirely defeated in our quest to shoehorn its meaning into a simple summary sentence. Finally, write simply, in plain English, and in a conversational way, such that you can read it out loud and sound natural. That's it! If you can also avoid any references to The Matrix, that is a bonus, but it's purely personal preference. Cheers, Tony.

    Read the article

  • A Plea for Plain English

    - by Tony Davis
    The English language has, within a lifetime, emerged as the ubiquitous 'international language' of scientific, political and technical communication. On the one hand, learning a single, common language, International English, has made it much easier to participate in and adopt new technologies; on the other hand, it must be exasperating to have to use English at international conferences, or on community sites, when your own language has a long tradition of scientific and technical usage. It is also hard to master the subtleties of using a foreign language to explain advanced ideas. This requires English speakers to be more considerate in their writing. Even if you're used to speaking English, you may be brought up short by this sort of verbiage:

    "Business Intelligence delivering actionable insights is becoming more critical in the enterprise, and these insights require large data volumes for trending and forecasting"

    It takes some imagination to appreciate the added hassle of working out what it means when English is a language you only use at work. Try, just to get a vague feel for it, using Google Translate to translate it from English to Chinese and back again:

    "Providing actionable business intelligence point of view is becoming more and more and more business critical, and requires that these insights and projected trends in large amounts of data"

    Not easy, eh? If you normally use a different language, you will need to pause for thought before finally working out that it really means:

    "Every Business Intelligence solution must be able to help companies to make decisions. In order to detect current trends, and accurately predict future ones, we need to analyze large volumes of data"

    Surely, it is simple politeness for English speakers to stop peppering their writing with a twisted vocabulary that renders it inaccessible to everyone else. It isn't just the problem of writers who use long words to give added dignity to their prose; it is the use of colloquial English. This changes and evolves at a dizzying rate, adding new terms and idioms almost daily; it is almost a new and separate language. By contrast, 'International English' is gradually evolving separately, at its own, more sedate, pace. As such, all native English speakers need to make an effort to learn and use it, switching from casual colloquial patter into a simpler form of communication that can be widely understood by different cultures, even if it gives you less credibility on the street. Simple-Talk is based, at least in part, on the idea that technical articles can be written simply and clearly, in a form of English that can be easily understood internationally, and that they can be written, with a little editorial help, by anyone, and read by anyone, regardless of their native language. Cheers, Tony.

    Read the article

  • Fair Comments

    - by Tony Davis
    To what extent is good code self-documenting? In one of the most entertaining sessions I saw at the recent PASS Summit, Jeremiah Peschka (blog | twitter) got a laugh out of a sleepy post-lunch audience with the following remark: "Some developers say good code is self-documenting; I say, get off my team". I silently applauded the sentiment. It's not that all comments are useful, but that I mistrust the basic premise that "my code is so clearly written, it doesn't need any comments". I've read many pieces describing the road to self-documenting code, and my problem with most of them is that they feed the myth that comments in code are a sign of weakness. They aren't; in fact, used correctly, I'd say they are essential. Regardless of how far intelligent naming can get you in describing what the code does, or how well any accompanying unit tests can explain to your fellow developers why it works that way, it's no excuse not to document fully the public interfaces to your code. Maybe I just mixed with the wrong crowd while learning my favorite language, but when I open a stored procedure I lose the will even to read it unless I see a big Phil Factor- or Jeff Moden-style header summarizing, in plain English, what the code does, how it fits into the broader application, and a usage example. This public interface describes the high-level process and should explain the role of the code, clearly, for fellow developers, language non-experts, and even any non-technical stakeholders in the project. When you step into the body of the code, the low-level details, then I agree that the rules are somewhat different, especially when code is subject to frequent refactoring that can quickly render comments redundant or misleading. At their worst, here, inline comments are sticking plaster to cover up the scars caused by poor naming conventions, failure of clarity when mapping a complex domain into code, or just not entirely understanding the problem (// this is the clever part). If you design and refactor your code carefully, so that it is as simple as possible, your functions do one thing only, you avoid having two completely different algorithms in the same piece of code, and your functions, classes and variables are intelligently named, then, yes, the need for inline comments should be minimal. And yet, even given this, I'd still argue that many languages (T-SQL certainly being one) just don't lend themselves to readability when performing even moderately complex tasks. If the algorithm is complex, I still like to see the occasional helpful comment. Please, therefore, be as liberal as you see fit in the detail of the comments you apply to this editorial, for, like code, it is bound to increase its clarity and usefulness. Cheers, Tony.
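
    As a sketch of the kind of public-interface header meant here (the procedure, parameters and history entries are invented for the example):

    /*****************************************************************************
     Procedure : dbo.GetBlockedSessions
     Purpose   : Lists sessions that have been blocked for longer than a given
                 threshold, so on-call staff can spot locking/blocking problems
                 without having to read the body of the code.
     Context   : Called by the monitoring job every minute; read-only.
     Usage     : EXEC dbo.GetBlockedSessions @MinWaitMs = 5000;
     History   : 2010-11-22  TD  Initial version.
    *****************************************************************************/
    CREATE PROCEDURE dbo.GetBlockedSessions
        @MinWaitMs INT = 1000  -- report only sessions blocked at least this long
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT session_id, blocking_session_id, wait_type, wait_time, wait_resource
        FROM   sys.dm_exec_requests
        WHERE  blocking_session_id <> 0
          AND  wait_time >= @MinWaitMs;
    END;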

    Read the article

  • Music before bells and whistles

    - by Tony Davis
    Why is it that Windows has so much difficulty in finding content on its file system? This is not an insurmountable technical problem; on my laptop, I have a database within which I can find text or names within millions of records in 300 milliseconds. I have a copy of Google Desktop that can find phrases within emails or documents almost as quickly. It is an important, though mundane, part of an operating system to be able to find files. The first thing I notice within Windows is that the facility to find files, or text within files, is called 'search' rather than 'find'. Hmm. This doesn't bode well. What's this? It does a brute-force search for file names? Here we are in an age when we can breed mice that glow in the dark, and manufacture computers that fit in our shirt pockets, and we find an operating system that is still entirely innocent of managing and indexing hierarchical content. I can read the files of my PC into a database, mimic the directory/folder hierarchies, and then find files in a flash; but when I do the same with Windows Vista, we are suddenly back in a 1960s time warp. Finding files based on their name is bad enough, but finding files based on the content they contain is more or less asking for an opportunity to wait 20 minutes in order to see a "file not found" message. Sadly, with Windows 7, Microsoft seems to have fallen into the familiar trap of adding bells and whistles before finishing the song. It's certainly true that Microsoft has added new features, and a certain polish, to Windows Search 4.0, the latest incarnation. It works more like a web search and offers a new search syntax, called Advanced Query Syntax, which allows you to search on file author, file size, date ranges (e.g. date:=7/4/09) and other file properties. Even so, basic search still does not work reliably. I've experienced first-hand its stubborn refusal, despite a full index, to acknowledge the existence of a file I know exists, based on a search for a specific term within that file that I know is in there somewhere; a file that Google Desktop search, or old wingrep, finds in seconds. When users hark back to the halcyon days of Windows XP search, you know something is seriously amiss. Shouldn't applications get the functionality right before applying animated menus and Teletubby graphics, or is advancing age making me grumpy? I'd be pleased to hear your views, as always. Cheers, Tony.

    Read the article

  • Going Metro

    - by Tony Davis
    When it was announced, I confess I was somewhat surprised by the striking new "Metro" user interface for Windows 8, based on Swiss typography, Bauhaus design, tiles, touches and gestures, and the new Windows Runtime (WinRT) API on which Metro apps were to be built. It all seemed to have come out of nowhere, like field mushrooms in the night, and seemed quite out of character for a company like Microsoft, which has hung on determinedly for over twenty years to its quaint windowing system. Many were initially puzzled by the lack of support for plug-ins in the "Metro" version of IE10, which ships with Win8, and by the apparent demise of Silverlight, Microsoft's previous 'radical new framework'. Win8 signals the end of the road for Silverlight apps in the browser, but then its importance here has been waning for some time anyway, now that HTML5 has usurped its most compelling use case, streaming video. As Shawn Wildermuth and others have noted, if you're doing enterprise desktop development with Silverlight then nothing much changes immediately, though it seems clear that ultimately Silverlight will die off in favor of a single WPF/XAML framework that supports those technologies that were pioneered on the phones and tablets. There is a mystery here. Is Silverlight dead, or merely repurposed? The more you look at Metro, the more it seems to resemble Silverlight. A lot of the philosophies underpinning Silverlight applications, such as the fundamentally asynchronous nature of the design, have moved wholesale into Metro, along with most of the Microsoft Silverlight dev team. As Simon Cooper points out, "Silverlight developers, already used to all the principles of sandboxing and separation, will have a much easier time writing Metro apps than desktop developers". Metro has certainly given the framework formerly known as Silverlight a new purpose. It has enabled Microsoft to bestow on Windows 8 a new "duality": it is both a traditional desktop OS supporting 'legacy' Windows applications, and an OS that supports a new breed of application that can share functionality such as search, that understands and can react to the full range of gestures and screen sizes, and that has location-awareness. It's clear that Win8 is developed in the knowledge that the 'desktop computer' will soon be a very large, tilted, touch-screen monitor. Windows owes its new-found versatility to the lessons learned from Windows Phone, but it's developed for the big screen, and with full support for familiar .NET desktop apps as well as the new Metro apps. But the old mouse-driven Windows applications will soon look very passé, just as MS-DOS character-mode applications did in the nineties. Cheers, Tony.

    Read the article

  • When done is not done

    - by Tony Davis
    Most developers and DBAs will know what it's like to be asked to do "a quick tidy up" on a project that, on closer inspection, turns out to be a barely working prototype: as the cynical programmer says, "when you're told that a project is 90% done, prepare for the next 90%". It is easy to convince a layperson that an application is complete just by using test data and sticking to the workflow that the development team has implemented and tested. The application is 'done' only in the sense that the anticipated paths through the software's features, using known data, are fully supported. Reality often strikes only when testers reveal its strange and erratic behavior in response to behavior from the end user that strays from the "ideal". The problem is this: how do we measure progress, accurately and objectively? Development methods such as Scrum or Kanban, when implemented rigorously, can mitigate these problems for developers, to some extent. They force a team to progress one small but complete feature at a time, to find out how long it really takes for this feature to be "done done"; in other words, done to the point where its performance and scalability are understood, it is tested for all conceivable edge cases and doesn't break... it is ready for prime time. At that point, the team has a much more realistic idea of how long it will take them to really complete all the remaining features, and so how far away the end is. However, it is when software crosses team boundaries that we feel the limitations of such techniques. No matter how well drilled the development team is, problems will still arise if they don't deploy frequently to a production environment. If they work feverishly for months on end before finally tossing the finished piece of software over the fence for the DBA to deploy to the "real world", then once again will dawn the realization that "done done" is still out of reach, as the DBA uncovers poorly coded transactions, unscalable queries, inefficient caching, and so on. By deploying regularly, end users will also have a much earlier opportunity to tell you how far what you implemented strayed from what they wanted. If you have a tale to tell, anonymized of course, of a "quick polish" project that turned out to be anything but, and what the major problems were, please do share it. Cheers, Tony.

    Read the article

  • IT Admin for Thrill Seekers

    - by Tony Davis
    A developer suggested to me recently that the life of the DBA was, surely, a dull one. My first reaction was indignation, quickly followed by the thought that for many people excitement isn't necessarily the most desirable aspect of their job. It's true that some aspects of the DBA role seem guaranteed to quieten the pulse; in the days of tape backups, time must have slowed to eternity for the person whose job it was to oversee this process, placing tapes into secure containers, ensuring correct labeling, and... sorry, I drifted off there for a second. On the other hand, if you follow the adventures of the likes of Brent Ozar or Tom LaRock, you'd be forgiven for thinking that much of a database guy's time is spent, metaphorically, diving through plate glass windows in tight-fitting underwear in order to extract grateful occupants from burning database applications. Alas, it isn't true of the majority, but it isn't as dull as some people imagine, and it is a helter-skelter ride compared with some other IT roles. Every IT department has people who toil away in shadowy corners doing quiet but mysterious tasks. When you ask them to explain what they do, you almost immediately want them to stop, but you hear enough to appreciate that these tasks are often absolutely vital to the smooth functioning of an IT organization. Compared with them, the DBAs are prima donnas. Here are a few nominations:

    Installation engineer - installs all of the company's laptops, workstations and software; deals with licensing, shipping and data entry. Many organizations, especially those subject to tight regulation, would simply grind to a halt without their efforts.

    Localization engineer - not quite software engineering, not quite translation; the job is to rebuild a product in a different language and make sure everything still works.

    QA Tester - firstly, I should say that the testers at Red Gate seem to me some of the most fulfilled people in the company. I refer here to the QA Tester whose job is more-or-less entirely to read a script, click some buttons and make sure the actual and expected values match.

    Configuration manager - for example, someone whose main job is to configure build environments so that devs can access their source code; assuredly necessary for the smooth functioning and productivity of the team, and hopefully well paid.

    So what other sort of job in IT should one choose if the work of a DBA proves to be too exciting? Or are these roles secretly more exciting than many imagine? I invite you all to put forward your own suggestions. Cheers, Tony.

    Read the article

  • PASS 13 Dispatches: moving to the cloud

    - by Tony Davis
    PASS Summit 13, Day 1 keynote by Quentin Clark, and we're hearing about "redefining mission critical in the cloud". With a move to the Windows Azure cloud comes the promise of capacity on demand, automatic HA, backups and patching, as well as passing to Microsoft the responsibility for managing hardware and upgrades. However, for many databases and applications, the best route to the cloud is not necessarily obvious. For most, the path of least resistance is IaaS: SQL Server in an Azure VM. It removes the hardware burden, but you still have to manage your databases, and implementing HA for SQL Server remains your responsibility. Also, scaling up comes at quite a cost; the biggest VM (8 CPU cores, 56 GB RAM, 16 1TB drives with 500 IOPS each) weighs in at over $4,500 per month. With PaaS, in the form of Windows Azure SQL Database, you get a "3-copies replica set", so HA comes out of the box, and it removes the majority of the administration burden, but you are moving your database into a very different environment. For a start, it's a shared environment, with other customers using the same compute nodes in the cluster, and potentially even sharing the same database (multi-tenancy). Unless you pay for SQL Database Premium edition, the resources available for your workload will depend on how nicely others "play" in the shared environment. You'll potentially need to do a lot of tuning and application rewriting to avoid throttling issues, optimising application-database communication to deal with the increased latency between the two, and so on. You'll need aggressive application caching, and you'll need retry logic to deal with (expected) node failure and the need to reconnect. In Tuesday's PASS Summit pre-con, the SQLCAT team spent a lot of time covering the telemetry techniques (collecting the necessary monitoring data into Azure storage) needed to perform capacity planning and to work out the hotspots and bottlenecks in your cloud applications. Tools like WAD (Windows Azure Diagnostics), performance counters, SQL Database DMVs, and others, will be essential. Of course, to truly exploit the vast horizontal scaling available from thousands of compute nodes, you'll also need to consider how to "shard" your data so Azure can move it between nodes at will. Finding the right path to the cloud isn't easy, but it's coming. I spoke to people one year ago who saw no real benefit in trying to move their infrastructure and databases to the cloud; now, at their companies, it's the conversation that won't go away. Tony.

    Read the article
