Search Results

Search found 17124 results on 685 pages for 'final cut pro'.


  • Back Up to Tape the Way You Shop For Groceries

    - by rickramsey
    Imagine if this were how you shopped for groceries: From the end of the aisle, sprint to the point where you reach the ketchup. Pull a bottle from the shelf and yell at the top of your lungs, “Got it!” Sprint back to the end of the aisle. Start again and sprint down the same aisle to the mustard, pull a bottle from the shelf, and again yell for the whole store to hear, “Got it!” Sprint back to the end of the aisle. Repeat this procedure for every item you need in the aisle. Proceed to the next aisle and follow the same steps for the list of items you need from that aisle.

    Sounds ridiculous, doesn’t it? Not only is it horribly inefficient, it’s exhausting and can lead to wear-out failures on your grocery cart, or worse, yourself. Yet this is essentially how NetApp and some other applications write NDMP backups to tape. In the analogy, the ketchup and mustard are the files to be written, yelling “Got it!” is the equivalent of a sync mark at the end of a file, and the sprint back to the end of an aisle is the process most commonly called a “backhitch,” where the drive has to back up on the tape before it can start writing again. Writing to tape in this way results in very slow tape drive performance and imposes unnecessary wear on the tape drive and the media, especially when writing small files.

    The good news is that not all tape drives behave this way when writing small files. Unlike midrange LTO drives, Oracle’s StorageTek T10000D tape drive is designed to handle this scenario efficiently. The difference between the two drive types is that the T10000D gives you the ability to write files in a NetApp NDMP backup environment the way you would normally shop for groceries: you stream through the aisles picking up items as you go, and only after checking out do you yell, “Got it!” (though you might do that last step silently).

    The T10000D has a feature called the Tape Application Accelerator, which prevents the drive from having to stop after each file is written to notify NetApp or another application that the write was successful. When enabled in the T10000D tape drive, the Tape Application Accelerator causes the tape drive to respond to tape mark and file sync commands differently than when disabled:
    - A tape mark received by the tape drive is treated as a buffered tape mark.
    - A file sync received by the tape drive is treated as a no-op command.

    Since buffered tape marks and no-op commands do not cause the tape drive to empty the contents of its buffer to tape and backhitch, the data is written to tape in significantly less time. Oracle has emulated NetApp environments with a number of different file sizes, comparing the T10000D (with the Tape Application Accelerator enabled) against LTO6 tape drives. The T10000D is not only monumentally faster, but also remarkably consistent. In addition, it writes the entire 50 GB of files without a single backhitch, while the LTO6 drive performs as many as 3,800 backhitches! At the end of writing the entire set of files, the T10000D tape drive reports back to the application, in this case NetApp, that the write was successful via a tape mark.

    So if the Tape Application Accelerator dramatically improves performance and reliability, why wouldn’t you always have it enabled? Because tape drive buffers are meant to be only temporary data repositories, so in the event of a power loss, the files that resided in the buffer could be lost in certain environments.

    Fortunately, there are best practices, depending on your environment, to keep this from happening. I highly recommend reading Maximizing Tape Performance with StorageTek T10000 Tape Drives (pdf) to decide which best practice is right for you. The white paper also digs deeper into the benefits of the Tape Application Accelerator. The white paper is free, and after downloading it you can decide for yourself whether you want to yell “Got it!” out loud or just silently to yourself.

    Customer Advisory Panel
    One final link: Oracle has started a Customer Advisory Panel program to collect feedback from customers on their current experiences with Oracle products, as well as desires for future product development. If you would like to participate in the program, go to this link at oracle.com.

    photo taken on Idaho's Sacajawea Historic Byway by Rick Ramsey
    - Brian Zents
    Follow OTN on Blog | Facebook | Twitter | YouTube

    Read the article

  • Independence Day for Software Components – Loosening Coupling by Reducing Connascence

    - by Brian Schroer
    Today is Independence Day in the USA, which got me thinking about loosely-coupled “independent” software components. I was reminded of a video I bookmarked quite a while ago of Jim Weirich’s “Grand Unified Theory of Software Design” talk at MountainWest RubyConf 2009. I finally watched that video this morning. I highly recommend it.

    In the video, Jim talks about software connascence. The dictionary definition of connascence (con-NAY-sense) is:
    1. The common birth of two or more at the same time.
    2. That which is born or produced with another.
    3. The act of growing together.

    The brief Wikipedia page about Connascent Software Components says that: “Two software components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. Connascence is a way to characterize and reason about certain types of complexity in software systems.”

    The term was introduced to the software world in Meilir Page-Jones’ 1996 book “What Every Programmer Should Know About Object-Oriented Design”. The middle third of that book is the author’s proposed graphical notation for describing OO designs. UML became the standard about a year later, so a revised version of the book was published in 1999 as “Fundamentals of Object-Oriented Design in UML”. Weirich says that the third part of the book, in which Page-Jones introduces the concept of connascence, “is worth the price of the entire book”. (The price of the entire book, by the way, is not much – I just bought a used copy on Amazon for $1.36, so that was a pretty low-risk investment. I’m looking forward to getting the book and learning about connascence from the original source.)

    Meanwhile, here’s my summary of Weirich’s summary of Page-Jones’ writings about connascence: the stronger the form of connascence, the more difficult and costly it is to change the elements in the relationship. Some of the connascence types, ordered from weak to strong, are:

    Connascence of Name
    Connascence of name is when multiple components must agree on the name of an entity. If you change the name of a method or property, then you need to change all references to that method or property. Duh. Connascence of name is unavoidable, assuming your objects are actually used. My main takeaway about connascence of name is that it emphasizes the importance of giving things good names so you don’t need to go changing them later.

    Connascence of Type
    Connascence of type is when multiple components must agree on the type of an entity. I assume this is more of a problem for languages without compilers (especially when used in apps without tests). I know it’s an issue with evil JavaScript type coercion.

    Connascence of Meaning
    Connascence of meaning is when multiple components must agree on the meaning of particular values, e.g. that “1” means normal customer and “2” means preferred customer. The solution to this is to use constants or enums instead of “magic” strings or numbers, which reduces the coupling by changing the connascence form from “meaning” to “name”.

    Connascence of Position
    Connascence of position is when multiple components must agree on the order of values. This refers to methods with multiple parameters, e.g.:

        eMailer.Send("[email protected]", "[email protected]", "Your order is complete", "Order completion notification");

    The more parameters there are, the stronger the connascence of position is between the component and its callers. In the example above, it’s not immediately clear when reading the code which email addresses are sender and receiver, and which of the final two strings is the subject vs. the body. Connascence of position could be improved to connascence of type by replacing the parameter list with a struct or class (a short sketch of this refactoring appears at the end of this post). This “introduce parameter object” refactoring might be overkill for a method with 2 parameters, but would definitely be an improvement for a method with 10 parameters. This points out two “rules” of connascence:
    - The Rule of Degree: the acceptability of connascence is related to the degree of its occurrence.
    - The Rule of Locality: stronger forms of connascence are more acceptable if the elements involved are closely related. For example, positional arguments in private methods are less problematic than in public methods.

    Connascence of Algorithm
    Connascence of algorithm is when multiple components must agree on a particular algorithm. Be DRY – Don’t Repeat Yourself. If you have “cloned” code in multiple locations, refactor it into a common function.

    Those are the “static” forms of connascence. There are also “dynamic” forms, including…

    Connascence of Execution
    Connascence of execution is when the order of execution of multiple components is important. Consumers of your class shouldn’t have to know that they have to call an .Initialize method before it’s safe to call a .DoSomething method.

    Connascence of Timing
    Connascence of timing is when the timing of the execution of multiple components is important. I’ll have to read up on this one when I get the book, but I assume it’s largely about threading.

    Connascence of Identity
    Connascence of identity is when multiple components must reference the same entity. The example Weirich gives is when you have two instances of the “Bob” Employee class: if you call the .RaiseSalary method on one and then the .Pay method on the other, does the payment use the updated salary?

    Again, this is my summary of a summary, so please be forgiving if I misunderstood anything. Once I get/read the book, I’ll make corrections if necessary and share any other useful information I might learn.

    See Also: Gregory Brown: Ruby Best Practices Issue #24: Connascence as a Software Design Metric (That link is failing at the time I write this, so I had to go to the Google cache of the page.)
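    To make the “introduce parameter object” refactoring concrete, here is a minimal TypeScript sketch. The names and addresses are hypothetical; the post’s own example is C#-flavored pseudocode:

        // Before: connascence of position. Callers must memorize the argument order.
        function sendPositional(from: string, to: string, body: string, subject: string): void {
          console.log(`To: ${to} (from ${from}) | ${subject}: ${body}`);
        }

        // After: a parameter object turns positional coupling into connascence of name.
        interface EmailMessage {
          from: string;
          to: string;
          subject: string;
          body: string;
        }

        function send(message: EmailMessage): void {
          console.log(`To: ${message.to} (from ${message.from}) | ${message.subject}: ${message.body}`);
        }

        // The call site now names each value, so a swapped subject and body
        // reads as obviously wrong instead of compiling silently.
        send({
          from: "sender@example.com",
          to: "recipient@example.com",
          subject: "Order completion notification",
          body: "Your order is complete",
        });

    The call is slightly wordier, but argument order no longer matters, which is exactly the weakening from connascence of position to connascence of type/name that the post describes.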

    Read the article

  • Exploring the Excel Services REST API

    - by jamiet
    Over the last few years Analysis Services guru Chris Webb and I have been on something of a crusade to enable better access to data that is locked up in countless Excel workbooks that litter the hard drives of enterprise PCs. The most prominent manifestation of that crusade up to now has been a forum thread that Chris began on Microsoft Answers entitled Excel Web App API? Chris began that thread with:

        “I was wondering whether there was an API for the Excel Web App? Specifically, I was wondering if it was possible (or if it will be possible in the future) to expose data in a spreadsheet in the Excel Web App as an OData feed, in the way that it is possible with Excel Services?”

    Until recently the last ten words of that paragraph, “in the way that it is possible with Excel Services”, had completely washed over me. However, a comment by Josh Booker on my recent blog post Thoughts on ExcelMashup.com (and a rant), in which Josh said:

        “Excel Services is a service application built for SharePoint 2010 which exposes a REST API for Excel documents. We're looking forward to pros like you giving it a try now that Office 365 makes SharePoint more easily accessible. Can't wait for your future blog about using the REST API to load data from Excel on Office 365 in SSIS.”

    made me think that perhaps the Excel Services REST API is something I should be looking into, and indeed that is what I have been doing over the past few days. And you know what? I'm rather impressed with some of what Excel Services' REST API has to offer.

    Unfortunately Excel Services' REST API also has one debilitating aspect that renders this blog post much less useful than it otherwise would be: it is not publicly available from the Excel Web App on SkyDrive. Therefore all I can do in this blog post is show you screenshots of what the REST API provides in SharePoint rather than linking you directly to those REST resources; that's a great shame, because one of the benefits of a REST API is that it is easily and ubiquitously demonstrable from a web browser. Instead I am hosting a workbook on SharePoint in Office 365, because that does include Excel Services' REST API, but again, all I can do is show you screenshots. N.B. If anyone out there knows how to make Office-365-hosted spreadsheets publicly accessible (i.e. without requiring a username/password), please do let me know (because knowing which forum on which to ask the question is an exercise in futility).

    In order to demonstrate Excel Services' REST API I needed some decent data, and for that I used the World Tourism Organization Statistics Database and Yearbook - United Nations World Tourism Organization dataset hosted on Azure Datamarket (it's free, by the way); this dataset "provides comprehensive information on international tourism worldwide and offers a selection of the latest available statistics on international tourist arrivals, tourism receipts and expenditure", and you can explore the data for yourself here. If you want to play along at home by viewing the data as it exists in Excel, it can be viewed here. Let's dive in.

    The root of Excel Services' REST API is the model resource, which resides at:

        http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model

    Note that this is true for every workbook hosted in a SharePoint document library - each Excel workbook is a RESTful resource. (Update: Mark Stacey on Twitter tells me that "It's turned off by default in onpremise Sharepoint (1 tickbox to turn on though)". Thanks Mark!)

    The data is provided as an ATOM feed, but I have Firefox's feed-reading ability turned on so you don't see the underlying XML goo. As you can see, there are four top-level resources: Ranges, Charts, Tables and PivotTables; exploring one of those resources is where things get interesting. Let's take a look at the Tables resource:

        http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables

    Our workbook contains only one table, called ‘Table1’ (to reiterate, you can explore this table yourself here). Viewing that table via the REST API is pretty easy; we simply append the name of the table onto our previous URI:

        http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1')

    As you can see, that quite simply gives us a representation of the data in that table. What you cannot see from this screenshot is that this is pure HTML that is being served up; that is all well and good, but actually we can do more interesting things. If we specify that the data should be returned not as HTML but as an image:

        http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1')?$format=image

    then that data comes back as a pure image and can be used in any web page where you would ordinarily use images. This is the thing that I really like about Excel Services’ REST API: we can embed an image in any web page, but instead of being a copy of the data, that image is actually live. If the underlying data in the workbook were to change, hitting refresh will show a new image. Pretty cool, no? (A short sketch of this appears at the end of this post.) The same is true of any charts or pivot tables in your workbook: those can be embedded as images too, and if the underlying data changes, boom, the image in your web page changes too.

    There is a lot of data in the workbook, so the image returned by that previous URI is too large to show here; instead let’s take a look at a different resource, this time a range:

        http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15')

    That URI returns cells A1 to C15 from a worksheet called “Data”. And if we ask for that as an image again:

        http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15')?$format=image

    Were this image resource not behind a username/password, this would be a live image of the data in the workbook as opposed to one that I had to copy and upload elsewhere. Nonetheless, I hope this little wrinkle doesn't detract from the innate value of what I am trying to articulate here: that an existing image in a web page can be changed on-the-fly simply by inserting some data into an Excel workbook. I for one think that that is very cool indeed!

    I think that's enough in the way of demo for now, as this shows what is possible using Excel Services' REST API. Of course, not all features work quite how I would like, and here is a bulleted list of some of my more negative feedback:
    - The URIs are pig-ugly. Are "_vti_bin" & "ExcelRest.aspx" really necessary as part of the URI? Would this not be better: http://server/Documents/TourismExpenditureInMillionsOfUSD.xlsx/Model/Tables(‘Table1’) ? That URI provides the necessary addressability and is a lot easier to remember.
    - Discoverability of these resources is not easy; we essentially have to handcrank a URI ourselves. Take the example of embedding a chart into a blog post: would it not be better if I could browse first through the document library to an Excel workbook, and THEN through the workbook to the chart/range/table that I am interested in? Call it a wizard if you like. That would be really cool and would, I am sure, promote this feature and cut down on the copy-and-paste disease that the REST API is meant to alleviate.
    - The resources that I demonstrated can be returned as feeds as well as images or HTML simply by changing the format parameter to ?$format=atom; however, for some inexplicable reason they don't return OData, and no-one on the Excel Services team can tell me why (believe me, I have asked).
    - $format is an OData parameter; however, other useful parameters such as $top and $filter are not supported. It would be nice if they were.
    - Although I haven't demonstrated it here, Excel Services' REST API does provide a makeshift way of altering the data by changing the value of specific cells; however, what it does not allow you to do is add new data into the workbook. Google Docs allows this, and it was one of the motivating factors for Chris Webb's forum post that I linked to above.
    - None of this works for Excel workbooks hosted on SkyDrive.

    This blog post is as long as it needs to be for a short introduction, so I'll stop now. If you want to know more, I recommend checking out a few links:
    - Excel Services REST API documentation on MSDN
    - So what does REST on Excel Services look like??? by Shahar Prish
    - Excel Services in SharePoint 2010 REST API Syntax by Christian Stich

    Any thoughts? Let's hear them in the comments section below!
    @Jamiet
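    As promised above, here is a minimal sketch of how one of these live image resources could be embedded and refreshed from script. The server name and workbook path are the hypothetical ones used throughout this post, and it assumes the browser is already authenticated to SharePoint:

        // Build the REST URI for a table rendered as an image.
        const base =
          "http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model";
        const tableImageUri = `${base}/Tables('Table1')?$format=image`;

        // Point an <img> at the resource; the picture reflects the current workbook data.
        const img = document.createElement("img");
        img.src = tableImageUri;
        document.body.appendChild(img);

        // "Refreshing" is just re-requesting the same URI; an arbitrary cache-busting
        // query parameter (the _ts name is made up) forces a fresh rendering.
        setInterval(() => {
          img.src = `${tableImageUri}&_ts=${Date.now()}`;
        }, 60_000);

    The point is that the img element never holds a copy of the data: every fetch re-renders whatever is in the workbook at that moment.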

    Read the article

  • How to Increase the VMWare Boot Screen Delay

    - by Trevor Bekolay
    If you’ve wanted to try out a bootable CD or USB flash drive in a virtual machine environment, you’ve probably noticed that VMWare’s offerings make it difficult to change the boot device. We’ll show you how to change these options, either for one boot or permanently for a particular virtual machine.

    Even experienced users of VMWare Player or Workstation may not recognize the screen above: it’s the virtual machine’s BIOS, which in most cases flashes by in the blink of an eye. If you want to boot up the virtual machine with a CD or USB key instead of the hard drive, you’ll need more than the blink of an eye to press Escape and bring up the Boot Menu. Fortunately, there is a way to introduce a boot delay that isn’t exposed in VMWare’s graphical interface: you have to edit the virtual machine’s settings file (a .vmx file) manually.

    Editing the Virtual Machine’s .vmx
    Find the .vmx file that contains the settings for your virtual machine. You chose a location for this when you created the virtual machine; in Windows, the default location is a folder called My Virtual Machines in your My Documents folder. In VMWare Workstation, the location of the .vmx file is listed on the virtual machine’s tab. If in doubt, search your hard drive for .vmx files. If you don’t want to use Windows’ default search, an awesome utility that locates files instantly is Everything.

    Open the .vmx file with any text editor. Somewhere in this file, enter the following line, save the file, then close out of the text editor:

        bios.bootdelay = 20000

    This will introduce a 20-second delay when the virtual machine loads up, giving you plenty of time to press the Escape button and access the boot menu. The number in this line is just a value in milliseconds, so for a five-second boot delay, enter 5000, and so on.

    Change Boot Options Temporarily
    Now, when you boot up your virtual machine, you’ll have plenty of time to enter one of the keystrokes listed at the bottom of the BIOS screen on boot-up. Press Escape to bring up the Boot Menu. This allows you to select a different device to boot from, like a CD drive. Your selection will be forgotten the next time you boot up this virtual machine.

    Change Boot Options Permanently
    When the BIOS screen comes up, press F2 to enter the BIOS Setup menu. Switch to the Boot tab, and change the ordering of the items by pressing the “+” key to move items up the list and the “-” key to move items down the list. We’ve switched the order so that the CD-ROM drive boots first. Once you make this change permanent, you may want to re-edit the .vmx file to remove the boot delay.

    Boot from a USB Flash Drive
    One thing that is noticeably missing from the list of boot options is a USB device. VMWare’s BIOS just does not allow this, but we can get around that limitation using the PLoP Boot Manager that we’ve previously written about. And as a bonus, since everything is virtual anyway, there’s no need to actually burn PLoP to a CD. Open the settings for the virtual machine you want to boot with a USB drive. Click on Add… at the bottom of the settings screen, and select CD/DVD Drive. Click Next. Click the Use ISO Image radio button, and click Next. Browse to find plpbt.iso or plpbtnoemul.iso from the PLoP zip file. Ensure that Connect at power on is checked, and then click Finish. Click OK on the main Virtual Machine Settings page. Now, if you use the steps above to boot using that CD/DVD drive, PLoP will load, allowing you to boot from a USB drive!
    Conclusion
    We’re big fans of VMWare Player and Workstation, as they let us try out a ton of geeky things without worrying about harming our systems. By introducing a boot delay, we can add bootable CDs and USB drives to the list of geeky things we can try out.

    Download PLoP Boot Manager

    Read the article

  • Juniper Strategy, LLC is hiring SharePoint Developers…

    - by Mark Rackley
    Isn’t everybody these days? It seems as though there are definitely more jobs than qualified devs these days, but yes, we are looking for a few good devs to help round out our burgeoning SharePoint team. Juniper Strategy is located in the DC area, however we will consider remote devs for the right fit. This is your chance to get in on the ground floor of a bright company that truly “gets it” when it comes to SharePoint, Project Management, and Information Assurance. We need like-minded people who “get it”, enjoy it, and who are looking for more than just a job. We have government and commercial opportunities as well as our own internal product that has a bright future of its own. Our immediate needs are for SharePoint .NET developers, but feel free to submit your resume for us to keep on file, as it looks as though we’ll need several people in the coming months. Please email us your resume and salary requirements to [email protected] Below are our official job postings. Thanks for stopping by, we look forward to hearing from you.

    Senior SharePoint .NET Developer
    Senior developer will focus on design and coding of custom, end-to-end business process solutions within the SharePoint framework, with the ability to serve as a senior developer/mentor and manage day-to-day development tasks. Work with business consultants and clients to gather requirements to prepare business functional specifications. Analyze and recommend technical/development alternative paths based on business functional specifications. For the selected development path, prepare the technical specification and build the solution. Assist the project manager with defining development task schedule and level-of-effort. Lead technical solution deployment.

    Job Requirements
    - Minimum of 7 years experience in agile development, with at least 3 years of SharePoint-related development experience (SPS, SharePoint 2007/2010, WSS2-4).
    - Thorough understanding of and demonstrated experience in development under the SharePoint Object Model, with focus on the WSS 3.0 foundation (MOSS 2007 Standard/Enterprise, Project Server 2007).
    - Experience with using multiple data sources/repositories for database CRUD activities, including relational databases, SAP, Oracle e-Business.
    - Experience with designing and deploying performance-based solutions in SharePoint for business processes that involve a very large number of records.
    - Experience designing dynamic dashboards and mashups with data from multiple sources (internal to SharePoint as well as from external sources).
    - Experience designing custom forms to facilitate user data entry, both with and without leveraging Forms Services.
    - Experience building custom web part solutions.
    - Experience with designing custom solutions for processing underlying business logic requirements including, but not limited to, SQL stored procedures, C#/ASP.Net workflows/event handlers (including timer jobs) to support multi-tiered decision trees and associated computations.
    - Ability to create complex solution packages for deployment (e.g., feature-stapled site definitions).
    - Must have impeccable communication skills, both written and verbal.
    - Seeking a "tinkerer"; proactive with a thirst for knowledge (and a sense of humor).
    - A US Citizen is required, and needs to be able to pass NAC/E-Verify. An active Secret clearance is preferred.
    - Applicants must pass a skills assessment test.
    - MCP/MCTS or comparable certification preferred.

    Salary & Travel: Negotiable

    SharePoint Project Lead
    Define project task schedule, work breakdown structure and level-of-effort. Serve as principal liaison to the customer to manage deliverables and expectations. Day-to-day project and team management, including preparation and maintenance of project plans, budgets, and status reports. Prepare technical briefings and presentation decks, provide briefs to C-level stakeholders. Work with business consultants and clients to gather requirements to prepare business functional specifications. Analyze and recommend technical/development alternative paths based on business functional specifications. The SharePoint Project Lead will be working with SharePoint architects and system owners to perform requirements/gap analysis and develop the underlying functional specifications. Once we have functional specifications as close to "final" as possible, the Project Lead will be responsible for preparation of the associated technical specification/development blueprint, along with assistance in preparing IV&V/test plan materials with support from other team members. This person will also be responsible for day-to-day management of "developers", but is also expected to engage in development directly as needed.

    Job Requirements
    - Minimum 8 years of technology project management across the software development life-cycle, with a minimum of 3 years of project management relating specifically to SharePoint (SPS 2003, SharePoint 2007/2010) and/or Project Server.
    - Thorough understanding of and demonstrated experience in development under the SharePoint Object Model, with focus on the WSS 3.0 foundation (MOSS 2007 Standard/Enterprise, Project Server 2007).
    - Ability to interact and collaborate effectively with team members and stakeholders of different skill sets, personalities and needs.
    - General "development" skill set required is a fundamental understanding of MOSS 2007 Enterprise, SP1/SP2, from the top level of skinning to the core of the SharePoint object model.
    - Impeccable communication skills, both written and verbal, and a sense of humor are required.
    - The projects will require being at a client site at least 50% of the time in Washington DC (NW DC) and Maryland (near Suitland).
    - A US Citizen is required, and needs to be able to pass NAC/E-Verify. An active Secret clearance is preferred.
    - PMP certification, PgMP preferred.

    Salary & Travel: Negotiable

    Read the article

  • Tutorial: Creating a Component for UCM

    - by Denisd
    So you have already installed UCM, following the tutorial: http://blogs.oracle.com/ecmbrasil/2009/05/tutorial_de_instalao_do_ucm.html and have also done the hands-on: http://blogs.oracle.com/ecmbrasil/2009/10/tutorial_de_ucm.html and now you want to go beyond the basics? You want to start building functionality for UCM? You want to become a UCM developer? You want to shape the Content Server in your own image?! Then today is your lucky day!

    In this tutorial, we will learn how to create a component for the Content Server. Our first component, although not that simple, will be built using only Content Server resources. In a future tutorial, we will learn how to use Java classes as part of our components. In this tutorial, we will develop a Favorites feature, where users can mark certain documents as their Favorites and later view those documents in a list. We will not build the component with every possible feature, but with what you will see here it will be straightforward to improve this component, even for production environments.

    The MyFavorites Component
    Some characteristics of our Favorites component:
    - For reasons of space, we will build this component in a "quick and dirty" way, that is, without necessarily following the best practices of component development. To better understand component development practices, I recommend reading the guide Working With Components.
    - It will be developed only for Portuguese (Brazil). Other languages can be added later.
    - It will present an "Add to Favorites" option in the "Content Actions" menu (Content Information page), so that the user can set the file as one of their favorites.
    - When clicking this link, the user will be taken to a page where they can enter a comment about the favorite, to make it easier to read later.
    - The favorites will be saved in a database table that we will create as part of the component.
    - The "My Content Server" panel will have a new option called "My Favorites", which brings up a page that lists the favorites and lets the user delete the links.
    - Some features will be left out of this exercise, again for reasons of space, but we will list them at the end as complementary exercises.

    Resources of Our Component
    The Favorites component will be built from a few resource types. Let's take a closer look at what these resources are and what they do:
    - Query: a query is any activity I need to execute against the database, the famous CRUD: Create, Read, Update, Delete. There are different ways to invoke a query, depending on the purpose:
      - Select Query: executes a SQL command but discards the result. Used only to test whether the database connection is OK. It will not be used in our exercise.
      - Execute Query: executes a SQL command that changes information in the database. It can be an INSERT, UPDATE or DELETE, and it discards the results. We will use Execute Queries to create, update and delete favorites.
      - Select Cache Query: executes a SQL SELECT command and stores the results in a ResultSet. This ResultSet is returned as the result of the service and can be manipulated in IDOC, Java or other languages. We will use a Select Cache Query to return a user's list of favorites.
    - Service: services are responsible for executing the queries (or Java classes, but that is a topic for another tutorial...). The service receives the input parameters, executes the query and returns the ResultSet (in the case of a SELECT). Services can be invoked through templates, IDOC pages, other applications (through the API), or directly in the browser URL. In this exercise we will create services to Create, Edit, Delete and List a user's favorites.
    - Template: templates are the user interface pages that will be presented to users. For example, before executing the service that deletes a document from the favorites, I want the user to see a screen with the document ID and a Confirm button, so that they are sure they are deleting the right record. This screen can be created as a template. In this exercise we will build templates for the main services, plus the page that lists all of the user's favorites and offers the edit and delete actions. Templates are nothing more than HTML pages with IDOC scripts.

    Our sequence of activities for developing this component will be:
    - Create the database table
    - Create the component using the Component Wizard
    - Create the Queries to insert, edit, delete and list the favorites
    - Create the Services that execute these Queries
    - Create the Templates, which are the pages that will interact with the users
    - Create the links, on the content information page and in the My Content Server panel

    All right, let's get started! Read this tutorial in full by clicking this link: http://blogs.oracle.com/ecmbrasil/Tutorial_Componente_Banco.pdf

    Happy coding!  :-)

    Read the article

  • Goodbye jQuery Templates, Hello JsRender

    - by SGWellens
    A funny thing happened on my way to the jQuery website: I blinked and a feature was dropped. jQuery Templates have been discontinued. The new pretender to the throne is JsRender. jQuery Templates looked pretty useful when they first came out. Several articles were written about them, but I stayed away because being on the bleeding edge of technology is not a productive place to be. I wanted to wait until it stabilized… in retrospect, it was a serendipitous decision. This time, however, I threw all caution to the wind and took a close look at JsRender. Why? Maybe I'm having a midlife crisis; I'll go motorcycle shopping tomorrow.

    Caveat, here is a message from the site: "Warning: JsRender is not yet Beta, and there may be frequent changes to APIs and features in the coming period." Fair enough, we've been warned.

    The first thing we need is some data to render. Below is some JSON-formatted data. Typically this will come from an asynchronous call to a web service. For simplicity, I hard-coded a variable:

        var Golfers = [
            { ID: "1", "Name": "Bobby Jones", "Birthday": "1902-03-17" },
            { ID: "2", "Name": "Sam Snead", "Birthday": "1912-05-27" },
            { ID: "3", "Name": "Tiger Woods", "Birthday": "1975-12-30" }
        ];

    We also need some templates. I created two. Note: the script blocks have the id property set. They are needed so JsRender can locate them.

        <script id="GolferTemplate1" type="text/html">
            {{=ID}}: <b>{{=Name}}</b> <i>{{=Birthday}}</i> <br />
        </script>

        <script id="GolferTemplate2" type="text/html">
            <tr>
                <td>{{=ID}}</td>
                <td><b>{{=Name}}</b></td>
                <td><i>{{=Birthday}}</i></td>
            </tr>
        </script>

    Including the correct JavaScript files is trivial:

        <script src="Scripts/jquery-1.7.js" type="text/javascript"></script>
        <script src="Scripts/jsrender.js" type="text/javascript"></script>

    Of course we need some place to render the output:

        <div id="GolferDiv"></div><br />
        <table id="GolferTable"></table>

    The code is also trivial:

        function Test()
        {
            $("#GolferDiv").html($("#GolferTemplate1").render(Golfers));
            $("#GolferTable").html($("#GolferTemplate2").render(Golfers));

            // you can inspect the rendered html if there are problems.
            // var html = $("#GolferTemplate2").render(Golfers);
        }

    And here's what it looks like with some random CSS formatting that I had laying around. Not bad; I hope JsRender lasts longer than jQuery Templates. One final warning: a lot of jQuery code is ugly, butt-ugly. If you do look inside the jQuery files, you may want to cover your keyboard with some plastic in case you get vertigo and blow chunks.

    I hope someone finds this useful.

    Steve Wellens
    CodeProject

    Read the article

  • Dual Boot Oracle Solaris 11/11 and Linux (Ubuntu 11.10/grub2)

    - by HartmutStreppel
    After having worked with OpenSolaris on my laptop first, then with an upgrade to Oracle Solaris 11 Express, I finally did a fresh install of Oracle Solaris 11/11 when it became available. I am not a big fan of upgrades, as I know that I am not the perfect administrator and my system gets spoiled with unclean configurations, outdated packages and wrong settings that cannot be reversed. So I prefer to start from scratch. Especially with Oracle Solaris 11 I wanted to have a system just like a customer would have it in production. The installation was smooth, more or less; if only I had read the documentation a bit better in advance.

    For a number of reasons I prefer a dual-boot system. The most important one is that, especially with mobile devices, you often run into network problems, and you have a hard time figuring out where the problem is: in your laptop hardware, in the OS you are running, or really within the network. If you have an alternate OS to boot, you can exclude the OS and your hardware. This makes you feel better. The second OS should be a Linux variant, and for some not so obvious reason I decided to go with the latest Ubuntu release (11.10). It replaced a very old OpenSUSE installation that had not been booted for a while.

    I knew that it was probably best to install Ubuntu first and then Oracle Solaris 11, as this would put the right boot information for Oracle Solaris into the MBR and onto the root partition. But then, how to enable dual boot with the two OSes? Searching the web, one mainly finds information about dual boot of Linux and Linux, or Linux and Windows. I do not want to explain which wrong configurations I worked through; I prefer to explain the final setup, which is extremely simple, and I am wondering why this is not covered as the easiest solution for most dual-boot setups. I use chainloading from and to both OSes, with the only disadvantage that I have to confirm two grub menus each time I want to boot the "other" OS.

    Still, there were some hurdles to jump over:
    - Ubuntu did not like getting its boot blocks placed on the partition instead of the disk; I must admit that I do not fully understand why. But using the --force option you can get that done.
    - Ubuntu needs an active partition; that was easy to achieve.
    - grub2 uses a different numbering scheme for the partitions. That is in the docs, if you read them.

    BTW: The usual disclaimer is valid. There is no guarantee that what I describe works or works well. Please back up your data carefully before trying any of this.

    So, Oracle Solaris 11 is installed on the first partition and Ubuntu on the third. With Ubuntu, things initially were a bit more complicated, as I did not know how to boot it, and the live CD did not offer the capability to boot the on-disk image (at least I did not find it). So I booted the live CD, mounted the Ubuntu installation at /mnt and wrote the boot blocks into the partition. This is something that does not seem to be recommended; at least grub-install refrained from doing what I intended. After a bit more research I was bold enough to use the --force option and wrote the boot blocks to /dev/sda3 using:

        grub-install --boot-directory=/mnt/boot --force --no-floppy /dev/sda3

    So, I now had a system with the Solaris boot loader in the MBR, Solaris-specific boot blocks on the Solaris root partition and Ubuntu-specific boot blocks in the Ubuntu partition. I just had to chain them together and I was done.

    Oracle Solaris 11: I have added the following lines to /rpool/boot/grub/menu.lst (be aware of the /rpool!!!!):

        title Ubuntu 11.10
        root (hd0,2)
        makeactive
        chainloader +1
        boot

    The Ubuntu root file system sits on the third partition (/dev/sda3).

    Ubuntu: I have added the following lines to /etc/grub.d/40_custom:

        menuentry "Solaris 11/11" {
            set root=(hd0,1)
            chainloader +1
        }

    Two things need to be mentioned: a) grub2 starts numbering partitions with 1, so my /dev/sda1 is partition 1. b) Oracle Solaris boots without the partition being made active (btw: the command to make a partition active with grub2 is "parttool (hd0,1) boot+", which currently does not work for me).

    As debugging grub is a bit complicated, I used the grub CLI to perform some tests and also used a tool that I found on sourceforge.net that was able to prepare a list of all boot loaders on all partitions. This told me that the basic setup was correct. Unfortunately I lost it in the live CD environment.

    I hope this is helpful for some of the readers.
    Hartmut

    Read the article

  • Building the Elusive Windows Phone Panorama Control

    When the Windows Phone 7 Developer SDK was released a couple of weeks ago at MIX10, many people noticed the SDK doesn’t include a template for a Panorama control. Here at Clarity we decided to build our own Panorama control for use in some of our prototypes, and I figured I would share what we came up with. There have been a couple of implementations of the Panorama control making their way through the interwebs, but I didn’t think any of them really nailed the experience that is shown in the simulation videos.

    One of the key design principles in the UX Guide for Windows Phone 7 is the use of motion. The WP7 OS is fairly stripped of extraneous design elements and makes heavy use of typography and motion to give users the necessary visual cues. Subtle animations and wide layouts help give the user a sense of fluidity and consistency across the phone experience. When building the panorama control I was fairly meticulous in recreating the motion as shown in the videos. The effect that is shown in the application hubs of the phone is known as a parallax scrolling effect. This pseudo-3D technique has been around in the computer graphics world for quite some time. In essence, background images move more slowly than foreground images, creating an illusion of depth in 2D. Here is an example of the traditional use: http://www.mauriciostudio.com/.

    One of the animation gems I've learned while building interactive software is the follow animation. The premise is straightforward: instead of translating content 1:1 with the interaction point, let the content catch up to the mouse or finger. The difference is subtle, but the impact on the smoothness of the interaction is huge. That said, it became the foundation of how I achieved the effect shown below.

    Source Code Available HERE

    Before I briefly describe the approach I took in creating this control, I’ll add some asterisks to the code below, as my coding skills aren’t up to snuff with the rest of my colleagues. This code is meant to be an interpretation of the WP7 panorama control and is not intended to be used in a production application.

    1. Layout the XAML
    The UI consists of three main components: the background image, the title, and the content. You can imagine each of these UI elements existing on its own plane, with a corresponding TranslateTransform to create the parallax effect.

    2. Storyboards + Procedural Animations = Sexy
    As I mentioned above, creating a fluid experience was at the top of my priorities while building this control. To recreate the smooth scroll effect shown in the video, we need to add some placeholder storyboards that we can manipulate in code to simulate the inertia and snapping. Using the easing functions built into Silverlight helps create a very pleasant interaction.

    3. Handle the Manipulation Events
    With Silverlight 3 we have some new touch event handlers. The new Manipulation events make handling the interactivity pretty straightforward. There are two event handlers that need to be hooked up to enable the dragging and motion effects.

    First, the ManipulationDelta event (the most relevant code is highlighted in pink): here we are doing some simple math with the manipulation deltas and setting the “To” values of the animations appropriately. Modifying the storyboards dynamically in code helps to create a natural feel, something that can’t easily be done with storyboards alone.

    Second, the ManipulationCompleted event: here we take the final velocities from the ManipulationCompleted event and apply them to the storyboards to create the snapping and scrolling effects. Most of this code determines what the next position of the viewport will be. The interesting part (shown in pink) is determining the duration of the animation based on the calculated velocity of the flick gesture. By using velocity as a variable in determining the duration of the animation, we can produce a slow animation for a soft flick and a fast animation for a strong flick. (A small sketch of this velocity-to-duration idea appears at the end of this post.)

    Challenges to the Reader
    There are a couple of things I didn’t have time to implement into this control, and I would love to see other WPF/Silverlight approaches:
    1. A good mechanism for deciphering when the user is manipulating the content within the panorama control versus the panorama itself. In other words, being able to accurately determine what is a flick and what is a click.
    2. Dynamically sizing the panorama control based on the width of its content. Right now each control panel is 400px; ideally the panel items would be measured and then the panorama control would update its size accordingly.
    3. Background and content wrapping. The WP7 UX guidelines specify that the content and background should wrap at the end of the list. In my code I restrict the drag at the ends of the list (like the iPhone). It would be interesting to see how this would affect the scroll experience.

    Well, it’s been fun building this control, and if you use it I’d love to know what you think. You can download the source HERE or from the Expression Gallery.

    Erik Klimczak | [email protected] | twitter.com/eklimcz
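    As referenced above, here is a minimal TypeScript sketch of the two ideas that drive the motion: a fractional parallax rate for the background and title planes, and a flick velocity mapped to animation duration. All names and constants are hypothetical; the actual control does this with Silverlight storyboards and easing functions:

        // Parallax: background and title track the content plane at fractional rates,
        // so the layers drift apart and create the illusion of depth.
        const BACKGROUND_RATE = 0.3; // hypothetical factors; tune to taste
        const TITLE_RATE = 0.5;

        function parallaxOffsets(contentOffset: number) {
          return {
            content: contentOffset,
            title: contentOffset * TITLE_RATE,
            background: contentOffset * BACKGROUND_RATE,
          };
        }

        // Flick: a faster flick yields a shorter, snappier animation.
        function flickDurationMs(velocityPxPerSec: number): number {
          const distance = 400; // one panel width, per the post
          const speed = Math.max(Math.abs(velocityPxPerSec), 200); // clamp soft flicks
          const ms = (distance / speed) * 1000;
          return Math.min(Math.max(ms, 150), 800); // keep motion in a pleasant band
        }

        // A strong flick (2000 px/s) animates in ~200 ms; a soft one (250 px/s) takes
        // the full 800 ms: the slow-for-soft, fast-for-strong behavior described above.
        console.log(flickDurationMs(2000), flickDurationMs(250));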

    Read the article

  • Create a Slide Show in Windows 7 Media Center

    - by DigitalGeekery
    Are you looking for a nice way to create and display a slide show from your photo collection? Today we’ll show you how to create a slide show, how to add music to it, and how to watch it from the comfort of your couch in Windows 7 Media Center.

    Create Slide Show
    Launch Windows 7 Media Center and click on the Picture Library tile found under Pictures and Videos. In the Pictures Library, scroll across to slide shows and click on Create Slide show. Enter a name for the slide show and click Next. If you are using a Windows Media Center remote, click on the OK button to bring up the onscreen keyboard. Use the directional buttons to navigate across the keyboard and press OK to select each letter. Click Done when finished. Select Picture Library and click Next. Select the pictures to include in your slide show. If using a remote, navigate through the images and press OK to select. If you are using a mouse, simply click on the selections. When you are finished, click Next.

    Now we can review and edit the slide show. Click the up or down arrows to move pictures up and down in the order. (More intuitive titles would be helpful in this case, as opposed to the randomly generated titles in the example below.) If you are finished, click Create. You can also choose to go back and add music to your slide show (or even more pictures). We’ll take a look at adding some music in our example. Click on the Add More button.

    Add Music to Your Slide Show
    Here we’ll select Music Library to add a song. Click Next. You’ll now be able to browse your Music Library to select songs for your slide show. Select your songs and click Next. When you are finished adding music and pictures, click Create.

    Once your slide show is saved, you can play it any time by going to slide shows in the Picture Library, then selecting the slide show title. Select play slide show when you’re ready to enjoy your new production. If you ever want to edit or delete the slide show, select it in the Picture Library and scroll to Actions. You’ll see those options under additional commands. You have the option to Edit Slide Show, Burn a CD/DVD, or Delete.

    Editing Slide Show Settings
    Within Media Center, go to Tasks, click on Pictures, then choose Slide Shows. From the Slide Show settings you have the option to show pictures in random order, show picture information, show song information, and use the pan and zoom effect. You can also adjust the length of time to display each picture, and change the background color. Be sure to click Save to apply any changes before exiting.

    If you choose to show picture information, the picture title, date, and star rating will be displayed in the top right. If your slide show is accompanied by music and you choose to show song information, you will get a translucent overlay for a few seconds at the beginning of each song to indicate the song, album, and artist. One of the really cool things about creating a slide show in Windows 7 Media Center is that you can complete the entire process using just a Media Center remote. Can’t get enough slide shows? Check out how to turn your desktop into a picture slide show in Windows 7.

    Read the article

  • JavaScript Intellisense Improvements with VS 2010

    - by ScottGu
    This is the twentieth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s blog post covers some of the nice improvements coming to JavaScript Intellisense with VS 2010 and the free Visual Web Developer 2010 Express. You’ll find with VS 2010 that JavaScript Intellisense loads much faster for large script files and with large libraries, and that it now provides statement completion support for more advanced scenarios compared to previous versions of Visual Studio.

    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Improved JavaScript Intellisense
    Providing Intellisense for a dynamic language like JavaScript is more involved than doing so with a statically typed language like VB or C#. Correctly inferring the shape and structure of variables, methods, etc. is pretty much impossible without pseudo-executing the actual code itself, since JavaScript as a language is flexible enough to dynamically modify and morph these things at runtime. VS 2010’s JavaScript code editor now has the smarts to perform this type of pseudo-code execution as you type, which is how its Intellisense completion is kept accurate and complete. Below is a simple walkthrough that shows off how rich and flexible it is with the final release.

    Scenario 1: Basic Type Inference
    When you declare a variable in JavaScript you do not have to declare its type. Instead, the type of the variable is based on the value assigned to it. Because VS 2010 pseudo-executes the code within the editor, it can dynamically infer the type of a variable and provide the appropriate code Intellisense based on the value assigned to it. For example, notice below how VS 2010 provides statement completion for a string (because we assigned a string to the “foo” variable). If we later assign a numeric value to “foo”, the statement completion (after this assignment) automatically changes to provide Intellisense for a number.

    Scenario 2: Intellisense When Manipulating Browser Objects
    It is pretty common with JavaScript to manipulate the DOM of a page, as well as work against browser objects available on the client. Previous versions of Visual Studio would provide JavaScript statement completion against the standard browser objects, but didn’t provide much help with more advanced scenarios (like creating dynamic variables and methods). VS 2010’s pseudo-execution of code within the editor now allows us to provide rich Intellisense for a much broader set of scenarios. For example, below we are using the browser’s window object to create a global variable named “bar”. Notice how we can now get Intellisense (with correct type inference for a string) with VS 2010 when we later try to use it. When we assign the “bar” variable as a number (instead of as a string), the VS 2010 Intellisense engine correctly infers its type and modifies statement completion appropriately to be that of a number instead.

    Scenario 3: Showing Off
    Because VS 2010 is pseudo-executing code within the editor, it is able to handle a bunch of scenarios (both practical and wacky) that you throw at it, and is still able to provide accurate type inference and Intellisense. For example, below we are using a for-loop and the browser’s window object to dynamically create and name multiple dynamic variables (bar1, bar2, bar3…bar9); a sketch of this kind of loop appears at the end of this post. Notice how the editor’s Intellisense engine identifies and provides statement completion for them. Because variables added via the browser’s window object are also global variables, they also now show up in the global variable Intellisense drop-down. Better yet, type inference is still fully supported: if we assign a string to a dynamically named variable we will get type inference for a string, and if we assign a number we’ll get type inference for a number. Just for fun (and to show off!) we could adjust our for-loop to assign a string to even-numbered variables (bar2, bar4, bar6, etc.) and a number to odd-numbered variables (bar1, bar3, bar5, etc.). We then get statement completion for a string for the “bar2” variable, and statement completion for a number for “bar1”.

    This isn’t just a cool pet trick
    While the above example is a bit contrived, the approach of dynamically creating variables, methods and event handlers on the fly is pretty common with many JavaScript libraries. Many of the more popular libraries use these techniques to keep the size of script library downloads as small as possible. VS 2010’s support for parsing and pseudo-executing libraries that use these techniques ensures that you get better code Intellisense out of the box when programming against them.

    Summary
    Visual Studio 2010 (and the free Visual Web Developer 2010 Express) now provide much richer JavaScript Intellisense support. This support works with pretty much all popular JavaScript libraries. It should help provide a much better development experience when coding client-side JavaScript and enabling AJAX scenarios within your ASP.NET applications.

    Hope this helps,

    Scott

    P.S. You can read my previous blog post on VS 2008’s JavaScript Intellisense to learn more about our previous JavaScript Intellisense (and some of the scenarios it supported). VS 2010 obviously supports all of the scenarios previously enabled with VS 2008.
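    For reference, here is a small sketch of the kind of loop Scenario 3 describes; the original post showed it only as editor screenshots. It is written as TypeScript, so the cast through any stands in for plain JavaScript's freedom to assign arbitrary properties on window:

        // Dynamically create global variables bar1..bar9 via the browser's window object.
        // Even-numbered names get strings and odd-numbered names get numbers, so an
        // inference engine has to track a distinct type for every generated name.
        const w = window as any; // plain JavaScript would just use window directly

        for (let i = 1; i <= 9; i++) {
          if (i % 2 === 0) {
            w["bar" + i] = "even: " + i; // string
          } else {
            w["bar" + i] = i; // number
          }
        }

        // In the VS 2010 editor, typing "bar2." would now complete as a string,
        // while "bar1." completes as a number.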

    Read the article

  • SQL SERVER – Simple Example to Configure Resource Governor – Introduction to Resource Governor

    - by pinaldave
Let us jump right away with question and answer mode.

What is Resource Governor? Resource Governor is a feature which can manage SQL Server workload and system resource consumption. We can limit the amount of CPU and memory consumption by limiting/governing/throttling on the SQL Server.

Why is Resource Governor required? If there are different workloads running on SQL Server, and each of the workloads needs different resources, or when workloads are competing for resources with each other and affecting the performance of the whole server, the Resource Governor becomes a very important tool.

What would be a real-world example of the need for Resource Governor? Here are two simple scenarios where the Resource Governor can be very useful.

Scenario 1: A server which is running an OLTP workload and various resource-intensive reports at the same time. The ideal situation is where there are two servers which are data-synced with each other: one server runs OLTP transactions and the second server runs all the resource-intensive reports. However, not everybody has the luxury to set up this kind of environment. When reports and OLTP transactions are running on the same server, limiting the resources available to the reporting workload ensures that the OLTP workload's critical transactions are not throttled.

Scenario 2: There are two DBAs in one organization. DBA A runs critical queries for the business and DBA B is doing maintenance of the database. At any point in time DBA A's work should not be affected, but at the same time DBA B should be allowed to work as well. The ideal situation is that when DBA B starts working he gets some resources, but he can't get more than the defined resources.

Does SQL Server have any default Resource Governor components? Yes, SQL Server has two Resource Governor components created by default. 1) Internal – This is used exclusively by the database engine and users have no control over it. 2) Default – This is used by all the workloads which are not assigned to any other group.

What are the major components of the Resource Governor? Resource Pools, Workload Groups, and Classification.

In simple words, here is the process of the Resource Governor: create a resource pool, create a workload group, create a classification function based on the criteria specified, and enable Resource Governor with the classification function. Let me further explain the same with a graphical image.

Is it possible to configure Resource Governor with T-SQL? Yes, here is the code for it with explanations in between.

Step 0: Here we are assuming that there are separate login accounts for the reporting workload and the OLTP workload.

/*-----------------------------------------------
Step 0: (Optional and for Demo Purpose)
Create Two User Logins
1) ReportUser, 2) PrimaryUser
Use ReportUser login for Reports workload
Use PrimaryUser login for OLTP workload
-----------------------------------------------*/

Step 1: Creating Resource Pools. We are creating two resource pools: 1) Report Server and 2) Primary OLTP Server. We are giving only a few resources to the Report Server pool because, as described in Scenario 1, the other workload is mission critical and the report server is not.
-----------------------------------------------
-- Step 1: Create Resource Pool
-----------------------------------------------
-- Creating Resource Pool for Report Server
CREATE RESOURCE POOL ReportServerPool
WITH (
    MIN_CPU_PERCENT=0,
    MAX_CPU_PERCENT=30,
    MIN_MEMORY_PERCENT=0,
    MAX_MEMORY_PERCENT=30)
GO

-- Creating Resource Pool for OLTP Primary Server
CREATE RESOURCE POOL PrimaryServerPool
WITH (
    MIN_CPU_PERCENT=50,
    MAX_CPU_PERCENT=100,
    MIN_MEMORY_PERCENT=50,
    MAX_MEMORY_PERCENT=100)
GO

Step 2: Creating Workload Groups. We are creating two workload groups, each mapping to one of the resource pools which we have just created.

-----------------------------------------------
-- Step 2: Create Workload Group
-----------------------------------------------
-- Creating Workload Group for Report Server
CREATE WORKLOAD GROUP ReportServerGroup
USING ReportServerPool;
GO

-- Creating Workload Group for OLTP Primary Server
CREATE WORKLOAD GROUP PrimaryServerGroup
USING PrimaryServerPool;
GO

Step 3: Creating a user-defined function which routes the workload to the appropriate workload group. In this example we are checking SUSER_NAME() and making the decision of workload group selection. We can use other functions such as HOST_NAME(), APP_NAME(), IS_MEMBER() etc.

-----------------------------------------------
-- Step 3: Create UDF to Route Workload Group
-----------------------------------------------
CREATE FUNCTION dbo.UDFClassifier()
RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @WorkloadGroup AS SYSNAME
    IF (SUSER_NAME() = 'ReportUser')
        SET @WorkloadGroup = 'ReportServerGroup'
    -- The original script compared against 'PrimaryServerPool' here, which is the
    -- pool name rather than the login created in Step 0; corrected to 'PrimaryUser'.
    ELSE IF (SUSER_NAME() = 'PrimaryUser')
        SET @WorkloadGroup = 'PrimaryServerGroup'
    ELSE
        SET @WorkloadGroup = 'default'
    RETURN @WorkloadGroup
END
GO

Step 4: In this final step we enable the Resource Governor with the classifier function created in Step 3.

-----------------------------------------------
-- Step 4: Enable Resource Governor
-- with UDFClassifier
-----------------------------------------------
ALTER RESOURCE GOVERNOR
WITH (CLASSIFIER_FUNCTION=dbo.UDFClassifier);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO

Step 5: If you are following this demo and want to clean up your example, you should run the following script. Running it will disable your Resource Governor as well as delete all the objects created so far.

-----------------------------------------------
-- Step 5: Clean Up
-- Run only if you want to clean up everything
-----------------------------------------------
ALTER RESOURCE GOVERNOR
WITH (CLASSIFIER_FUNCTION = NULL)
GO
ALTER RESOURCE GOVERNOR DISABLE
GO
DROP FUNCTION dbo.UDFClassifier
GO
DROP WORKLOAD GROUP ReportServerGroup
GO
DROP WORKLOAD GROUP PrimaryServerGroup
GO
DROP RESOURCE POOL ReportServerPool
GO
DROP RESOURCE POOL PrimaryServerPool
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO

I hope this introductory example sheds enough light on the subject of Resource Governor. In future posts we will take this same example and learn a few more details.

Reference: Pinal Dave (http://blog.sqlauthority.com)
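As a quick sanity check after Steps 1–4, the standard Resource Governor catalog and dynamic management views show whether the classifier is enabled and how the pools and groups are configured (a hedged sketch, not part of the original script):

-- Is Resource Governor enabled, and which classifier function is registered?
SELECT is_enabled, classifier_function_id
FROM sys.resource_governor_configuration;
GO
-- Effective resource pool settings
SELECT pool_id, name, min_cpu_percent, max_cpu_percent
FROM sys.dm_resource_governor_resource_pools;
GO
-- Which workload group maps to which pool
SELECT group_id, name, pool_id
FROM sys.dm_resource_governor_workload_groups;
GO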

    Read the article

  • Improve the Quality of ePub eBooks with Sigil

    - by Matthew Guay
Would you like to correct errors in your ePub formatted eBooks, or even split them into chapters and create a Table of Contents?  Here’s how you can with the free program Sigil.

eBooks are increasingly popular with the rise of eBook readers and reading apps on mobile devices.  We recently showed you how to convert a PDF eBook to ePub format, but as you may have noticed, sometimes the converted file has some glitches or odd formatting.  Additionally, many of the free ePub books available online from sources like Project Gutenberg do not include a table of contents.  Sigil is a free application for Windows, OS X, and Linux that lets you edit ePub files, so let’s look at how you can use it to improve your eBooks.

Note: Sigil took several moments to open files in our tests, and froze momentarily when we maximized the window.  Sigil is currently pre-release software in active development, so we would expect the bugs to be worked out in future versions.  As usual, only install it if you’re comfortable testing pre-release software.

Getting Started

Download Sigil (link below), making sure to select the correct version for your computer.  Run the installer, and select your preferred setup language when prompted. After a moment the installer will appear; set it up as normal. Launch Sigil when it’s finished installing.  It opens with a default blank ePub file, so you could actually start writing an eBook from scratch right here.

Edit Your ePub eBooks

Now you’re ready to edit your ePub books.  Click Open and browse to the file you want to edit. Now you can double-click any of the HTML or XHTML files in the left sidebar and edit them just like you would in Word. Or you can choose to view it in Code View and edit the actual HTML directly. The sidebar also gives you access to the other parts of the ePub file, such as images and CSS styles. If your ePub file has a Table of Contents, you can edit it with Sigil as well.  Click Tools in the menu bar, and then select TOC Editor.  Strangely, there is no way to create a new table of contents, but you can remove entries from an existing one.

Convert TXT Files to ePub

Many free eBooks online, especially older, out-of-copyright titles, are available in plain text format.  One problem with these files is that they usually use hard returns at the end of lines, so they don’t reflow to fill your screen efficiently. Sigil can easily convert these files to the more useful ePub format.  Open the text file in Sigil, and it will automatically reflow the text and convert it to ePub.  As you can see in the screenshot below, the text in the eBook does not have hard line breaks at the end of each line, and will be much more readable on mobile devices. Note that Sigil may take several moments opening the book, and may even become unresponsive while analyzing it.  Now you can edit your eBook, split it into chapters, or just save it as is.  Either way, make sure to select Save As to save your book in ePub format.

Conclusion

As mentioned before, Sigil seems to run slowly at times, especially when editing large eBooks.  But it’s still a nice solution to edit and extend your ePub eBooks, and even convert plain text eBooks to the nicer ePub format.  Now you can make your eBooks work just like you want, and read them on your favorite device! If you feel comfortable editing HTML files, check out our article on how to edit ePub eBooks with your favorite HTML editor.
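Incidentally, the reason Sigil can show you HTML files, images, and CSS in its sidebar is that an ePub file is simply a ZIP archive with a defined layout. As a rough illustration (the full-path value varies by book), the META-INF/container.xml entry inside every ePub points at the book’s package file:

<?xml version="1.0" encoding="UTF-8"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <!-- OEBPS/content.opf is a common convention; the actual path is book-specific -->
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>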
Links

Download Sigil from Google Code
Download free ePub eBooks from Project Gutenberg

    Read the article

  • SQL SERVER – SSIS Parameters in Parent-Child ETL Architectures – Notes from the Field #040

    - by Pinal Dave
[Notes from Pinal]: SSIS is a very well explored subject; however, there are so many interesting elements that whenever we read about it, we learn something new. One such concept is the parent-child ETL architecture relationship in SSIS. Linchpin People are database coaches and wellness experts for a data driven world. In this 40th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation about understanding SSIS parameters in parent-child ETL architectures.

In this brief Notes from the Field post, I will review the use of SSIS parameters in parent-child ETL architectures. A very common design pattern used in SQL Server Integration Services is one I call the parent-child pattern.  Simply put, this is a pattern in which packages are executed by other packages.  An ETL infrastructure built using small, single-purpose packages is very often easier to develop, debug, and troubleshoot than large, monolithic packages.  For a more in-depth look at parent-child architectures, check out my earlier blog post on this topic.

When using the parent-child design pattern, you will frequently need to pass values from the calling (parent) package to the called (child) package.  In older versions of SSIS, this process was possible but not necessarily simple.  When using SSIS 2005 or 2008, or even when using SSIS 2012 or 2014 in package deployment mode, you would have to create package configurations to pass values from parent to child packages.  Package configurations, while effective, were not the easiest tool to work with.  Fortunately, starting with SSIS in SQL Server 2012, you can now use package parameters for this purpose.

In the example I will use for this demonstration, I’ll create two packages: one intended for use as a child package, and the other configured to execute said child package.  In the parent package I’m going to build a foreach loop container in SSIS, and use package parameters to pass in a value – specifically, a ClientID – for each iteration of the loop.  The child package will be executed from within the foreach loop, and will create one output file for each client, with the source query and filename dependent on the ClientID received from the parent package.

Configuring the Child and Parent Packages

When you create a new package, you’ll see the Parameters tab at the package level.  Clicking over to that tab allows you to add, edit, or delete package parameters. As shown above, the sample package has two parameters.  Note that I’ve set the name, data type, and default value for each of these.  Also note the column entitled Required: this allows me to specify whether the parameter value is optional (the default behavior) or required for package execution.  In this example, I have one parameter that is required, and the other is not.

Let’s shift over to the parent package briefly, and demonstrate how to supply values to these parameters in the child package.  Using the execute package task, you can easily map variable values in the parent package to parameters in the child package. The execute package task in the parent package, shown above, has the variable vThisClient from the parent package mapped to the pClientID parameter shown earlier in the child package.  Note that there is no value mapped to the child package parameter named pOutputFolder.
Since this parameter has the Required property set to False, we don’t have to specify a value for it, which will cause that parameter to use the default value we supplied when designing the child package.

The last step in the parent package is to create the foreach loop container I mentioned earlier, and place the execute package task inside it.  I’m using an object variable to store the distinct client ID values, and I use that as the iterator for the loop (I describe how to do this more in depth here).  For each iteration of the loop, a different client ID value will be passed into the child package parameter.

The final step is to configure the child package to actually do something meaningful with the parameter values passed into it.  In this case, I’ve modified the OLE DB source query to use the pClientID value in the WHERE clause of the query to restrict results for each iteration to a single client’s data.  Additionally, I’ll use both the pClientID and pOutputFolder parameters to dynamically build the output filename. As shown, the pClientID is used in the WHERE clause, so we only get the current client’s invoices for each iteration of the loop. For the flat file connection, I’m setting the Connection String property using an expression that engages both of the parameters for this package, as shown above.

Parting Thoughts

There are many uses for package parameters beyond a simple parent-child design pattern.  For example, you can create standalone packages (those not intended to be used as a child package) and still use parameters.  Parameter values may be supplied to a package directly at runtime by a SQL Server Agent job, through the command line (via dtexec.exe), or through T-SQL. Also, you can have project parameters as well as package parameters.  Project parameters work in much the same way as package parameters, but they apply to all packages in a project, not just a single package.

Conclusion

Of the numerous advantages of using the catalog deployment model in SSIS 2012 and beyond, package parameters are near the top of the list.  Parameters allow you to easily share values from parent to child packages, enabling more dynamic behavior and better code encapsulation. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)
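As an addendum to the T-SQL option mentioned in Parting Thoughts, here is a hedged sketch of supplying a package parameter value through the SSIS catalog. The folder, project, and package names below are hypothetical; pClientID is the parameter from the example, and object_type 30 denotes a package parameter:

DECLARE @execution_id BIGINT;

-- Create an execution for the child package in the SSIS catalog
EXEC SSISDB.catalog.create_execution
    @folder_name  = N'Demo',              -- hypothetical folder
    @project_name = N'ParentChildDemo',   -- hypothetical project
    @package_name = N'ChildPackage.dtsx', -- hypothetical package
    @execution_id = @execution_id OUTPUT;

-- Supply a value for the pClientID package parameter
EXEC SSISDB.catalog.set_execution_parameter_value
    @execution_id,
    @object_type     = 30,
    @parameter_name  = N'pClientID',
    @parameter_value = 42;

-- Start the execution
EXEC SSISDB.catalog.start_execution @execution_id;
GO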

    Read the article

  • Preview and Purchase Ebooks with Kindle for PC

    - by Matthew Guay
Want to look over a new book, or buy it immediately in ebook format?  Here’s how you can preview and purchase most new books from your PC the easy way.

Most new books, including almost all New York Times Bestsellers, are available in ebook format from Amazon’s Kindle store.  The Kindle store also includes numerous free ebooks, including out-of-print classics and a surprising number of recent books.  With the free Kindle for PC reader, you can read any of these ebooks without having to purchase a Kindle device.

Preview Ebooks Before You Purchase

Sometimes it can be hard to know if you want to purchase a new book without reading some of it first.  With Kindle for PC, however, you can download a sample of any ebook available for free.  The sample usually includes the table of contents, foreword or introduction, and often part or all of the first chapter.

To get an ebook sample, find the book you want in the Kindle store (link below). Now, under the Try it free box, select the correct computer or device to send the sample to, and click Send Sample now. Amazon will thank you for your order, even though this is only a free preview.  Click the Go to Kindle for PC button to open Kindle and read your ebook preview.  Or, if Kindle is already running, press the Refresh button in the top right corner to check for new ebooks and previews. Kindle will synchronize and download the previews you selected.

The most recently downloaded items show up on the top left.  All sample books have a red “Sample” bar on the bottom of their cover, and they also include links to Buy or view more info about the book on its cover.  Double-click your sample to start reading it. Your ebook sample will usually open at the introduction or beginning of the first chapter, but you can also view the index, cover, and more.

When you reach the end of the sample book, you can click a link to buy the book or view more details about it.  Strangely, both of these links currently take you to the ebook’s page on Amazon.com, but perhaps in the future the Buy link will directly let you purchase the book. Or, you can also click Buy Now on a sample book directly from your Kindle library. If you clicked one of these links, you will be returned to the ebook’s page on Amazon.  Choose the PC or Kindle you want the book delivered to, and this time, select Buy Now with 1-Click. Add your payment info if you’re not already set up for 1-Click Shopping, and then you’ll be shown the same Thank You page as before.  Refresh Kindle for PC, and your new ebook will automatically download.  Strangely, the sample ebook is not automatically removed, so you can right-click on the sample and select Delete this Book.  Additionally, your last-read page in the sample is not synced to the purchased book, so you may have to find your place again. Now, enjoy your full ebook!

Download Free Books for Kindle

The Kindle Store has an amazing number of free ebooks.  Some free books may only be free for a limited time as a promotion, while others, such as old classics, may always be free.  Either way, once you download one, you can keep it forever. When you find a free ebook you want, select the Kindle or PC you want to download it to and click “Buy now with 1-Click”.  Notice that this book shows its price as $0.00, but the button still says Buy now.  Rest assured, if the book’s price shows up as $0.00, you will not be charged anything for downloading it. Your ebook will download as usual after your next refresh.
Note that you can still download the sample first if you want, but since the book is free, just download the whole thing and delete it if you don’t want it.

Redownload Your Purchased or Free Books

If you install Kindle on a new PC or delete a book from your library, you can always re-download it from your Amazon account.  Browse to the Manage Your Kindle page on Amazon (link below), sign in with your Amazon account, and scroll down to the list of your purchased content. Select the book you wish to download, then choose the Kindle or PC you want to download it to and press Go. Note: There is a “Delete this title” button right below this.  If you press the Delete button, you will not ever be able to re-download it. Or, you can download the book directly from the Archived Items tab in Kindle on your other PC. And, if you have your Kindle content on multiple computers, your reading will be synced via Whispersync.  You can start reading on your desktop, and then resume where you left off from your laptop.

Conclusion

With these tips and tricks, it is much easier to preview and purchase new books, find and download free ebooks, and re-download any you’ve deleted from your PC.  Have fun filling up your digital library!

Links

Manage your Kindle account

    Read the article

  • Team Leaders & Authors - Manage and Report Workflow using "Print an Outline" in UPK

    - by [email protected]
Did you know you can “print an outline?” You can print any outline or portion of an outline.

Why might you want to “print an outline” in UPK? Have you ever wondered how many topics you have recorded, how many of your topics are ready for review, or even better, how many topics are complete? Do you need to report your project status to management? Maybe you just like to have a copy of your outline to refer to during development. Included in this output is the outline structure as well as the layout defined in the Details View of the Outline Editor.

To print an outline, you must open either a module or section in the Outline Editor. A set of default data columns is automatically included in the output; however, you can configure which columns you want to appear in the report by switching to the Details view and customizing the columns. (To learn more about customizing your columns refer to the Add and Remove Columns section of the Content Development.pdf guide.)

To print an outline from the Outline Editor:
1. Open a module or section document in the Outline Editor.
2. Expand the documents to display the details that you want included in the report.
3. On the File menu, choose Print and use the toolbar icons to print, view, or save the report to a file.

Personally, I opt to save my outline in Microsoft Excel. Using the delivered features of Microsoft Excel you can add columns of information, such as development notes, to your outline, or you can graph and chart your project status. As mentioned above, you can configure which columns appear in the outline. When utilizing the Print an Outline feature in conjunction with the Managing Workflow features of the UPK multi-user instance, you as a Team Lead or Author can better report project status. Read more about Managing Workflow below.

Managing Workflow

The Properties toolpane contains special properties that allow authors to track document status or State as well as assign Document Ownership.

Assign Content State

The State property is an editable property for communicating the status of a document. This is particularly helpful when collaborating with other authors in a development team. Authors can assign a state to documents from the master list defined by the administrator. The default list of States includes (blank), Not Started, Draft, In Review, and Final. Administrators can customize the list by adding, deleting or renaming the values.

To assign a State value to a document:
1. Make sure you are working online.
2. Display the Properties toolpane.
3. Select the document(s) to which you want to assign a state. Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click).
4. In the Workflow category, click in the State cell.
5. Select a value from the list.

Assign Document Ownership

In many enterprises, multiple authors often work together developing content in a team environment. Team leaders typically handle large projects by assigning specific development responsibilities to authors. The Owner property allows team leaders and authors to assign documents to themselves and other authors to track who is responsible for a specific document. You view and change document assignments for a document using the Owner property in the Properties toolpane.

To assign a document owner:
1. Make sure you are working online.
2. On the View menu, choose Properties.
3. Select the document(s) to which you want to assign document responsibility.
Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click).
4. In the Workflow category, click in the Owner cell.
5. Select a name from the list.

Is anyone out there already using this feature? Share your ideas with the group. Those of you new to this feature, give it a test drive and let us know what you think.

- Kathryn Lustenberger, Oracle UPK & Tutor Outbound Product Management

    Read the article

  • A Graduate&rsquo;s Journey at Oracle &ndash; Bhaskar Ghosh From Oracle India

    - by david.talamelli
I am Bhaskar Ghosh, and I work as an Applications Engineer with Oracle. Well, it was three years ago when my journey with one of the largest software companies started. It was a fine day and a decisive moment when I was placed in Oracle as a campus recruit from College of Engineering Guindy, Anna University, Chennai! I always thought of looking back at the time that helped me learn beyond my boundaries, think broader and ahead, and grow – technically, professionally and personally. Hmmn! Let me recall the eventful moments once again.

My first day as an intern at Oracle started in late 2007. I met one of the Oracle Managers at the Oracle Campus in Hyderabad, and on the same day I also met another Oracle employee who was later to become my first manager. I was charged and thrilled with the environment and the wonderful people around me! I was joined by two other interns, who also had a Masters in Computer Applications. We formed a very friendly group with all the interns and the new hires, and shared our excitement and learning. One of the other graduates and I started working on a very interesting project on Semantic technology, and we finally had our names added as co-developers for this very project. During this five-month phase we learnt tremendously and worked very hard, partly because we had to travel back and forth to our colleges to submit reports and present for the Masters in Computer Applications final year project reviews.

After completing my MCA, I joined as a full-time employee in 2008. During the next year, we worked on interesting and bleeding edge technologies - OWL, RDF, SPARQL, Visualization, J2EE, Social Web features, Semantic Web technologies, Web Services and many more! We developed cool, rich internet and desktop applications. Little did I know at that time that this learning would help me tremendously in my next project at Oracle.

The following year saw me being assigned a role in a different project that my other team members had been working on for the previous two years. It took me two months to understand and get into a flow with this new task. I was fortunate that this phase helped me enhance my interpersonal and communication skills, as much as it helped me grow professionally with a better ability to tackle multiple priorities and switch between tasks based on the team’s requirements. I was made the POC for all communications with our team and other product teams. I personally feel that this time enhanced me tremendously in technologies like Oracle Forms, J2EE, Java and Web Services.

The last six months saw me becoming a member of the Institute of Electrical and Electronics Engineers (IEEE), and continuing my higher education at the International Institute of Information Technology, Hyderabad. Oracle supports its employees becoming members of professional bodies, and higher studies are supported by management; I think this is tremendously helpful in the professional and technical growth of employees. For the last three months, I have been working on great and useful enhancements to our product. Ah, beautiful!

All these years, there have been other moments and events of fun that are worth mentioning too. Clubs and groups at Oracle such as the Employee Club, Oracle Volunteers, Football Club, etc. have always kept on organizing numerous events and competitions, full of fun and entertainment. I really enjoyed participating, even in a small way, in the intra-Oracle football tourney, Oracle Volunteer Days, OraFora, OraOvations, and a few more.
Those ‘Seasons of Sharing’, those ‘Blood Donation camps’, those ‘Diwali and Christmas gifts and events’, those ‘fun events at the annual function called OraOvations’, those ‘books and cycle stalls’, and those so many other things… It only fills my mind with pleasure.

The last three years have been very eventful: they have been full of learning and growth under the very able and encouraging guidance of my manager. I have had the opportunity to know about and/or interact with many wonderful personalities, and learn from them, here at Oracle. The environment, the people, and the fellow developers have been so friendly, and always ready to help when we were in doubt. I really love the big office space, and the flexible timings, and the caring people around. I look forward to a beautiful, learning and motivating journey with Oracle.

    Read the article

  • JavaOne pictures and Community Commentary on JCP Awards

    - by heathervc
We posted some pictures from JCP related events at JavaOne 2012 on the JCP Facebook page today.  The 2012 JCP Program Award winners and some of the nominees responded to the community recognition of their achievements during some of the JCP events last week.

“Our job on the EC is to balance the need of innovation – so we don’t standardize too early, or too late. We try to find that sweet spot that makes innovation and standardization work together, and not against each other.” - Ben Evans, CEO of jClarity and Executive Committee (EC) representative of the London Java Community, 2012 JCP Member/Participant of the Year Winner

“SouJava has been evangelizing the Java platform, promoting the Java ecosystem in Brazil, and contributing to JSRs for several years. It’s very gratifying to have our work recognized, on behalf of many developers and Java User Groups around the world. This really is the work of a large group of people, represented by the few that can be here tonight.” - Michael Santos, representative of SouJava, 2012 JCP Member/Participant of the Year Winner

“In the last years Credit Suisse has contributed to the development of Java EE specifications through participation in many customer advisory boards, through statements of requirements for extensions to the core Java related products in use, and active participation in JSRs. Winning the JCP Outstanding Spec Lead Award 2012 is very encouraging for our engagement and also demonstrates the level of expertise and commitment to drive the evolution of Java. Victor Grazi is happy and honored to receive this award.” - Susanne Cech Previtali, Executive Committee (EC) representative of Credit Suisse, accepting the award for the 2012 JCP Outstanding Spec Lead Winner

“Managing a JSR is difficult. There are so many decisions to be made and so many good and varied opinions, you never really know if you have decided correctly. The key to success is transparency and collaboration. I am truly humbled by receiving this award; there are so many other active JSRs.” Victor added that going forward in the JCP EC, they would like to simplify and open the process of participation – being addressed in the JCP.Next initiative of the JCP EC. “We would also like to encourage the engagement of universities, professors and students – as an important part of the Java community. While innovation is the lifeblood of our community and industry, without strong standards and compatibility requirements, we all end up in a maze of technology where everything is slightly different and doesn’t quite work with everything else.” - Victor Grazi, Executive Committee (EC) representative of Credit Suisse, 2012 JCP Outstanding Spec Lead Winner

“I am very pleased, of course, to accept this award, but the credit really should go to all of those who have participated in the work of the JCP, while pushing for changes in the way it operates.  JCP.Next represents three JSRs. The first two are done, but the final step, JSR 358, is the complicated one, and it will bring in the lawyers.
Just to give you an idea of what we’re dealing with, it affects licensing, intellectual property, patents, implementations not based on the Reference Implementation (RI), the role of the RI, compatibility policy, possible changes to the Technical Compatibility Kit (TCK), transparency, where individuals fit in, open source, and more.” - Patrick Curran, JCP Chair, Spec Lead on the JCP.Next JSRs (JSR 348, JSR 355 and JSR 358), 2012 JCP Most Significant JSR Winner

“I’m especially glad to see the JCP community recognize JCP.Next for its importance. The governance work it represents is KEY to moving the Java platform forward and the success of the technology.” - John Rizzo, Executive Committee (EC) representative of Aplix Corporation, JSR Expert Group Member

“I am deeply honored to be nominated. I had the privilege to receive two awards on behalf of Expert Groups and Spec Leads two years ago. But this time, I am nominated personally, which values my own contribution to the JCP, and of course, participation in JSRs and the EC work. I’m a fan of Agile principles and values. Working as an Agile Coach and Consultant, I use them for some of the biggest EC Member companies and projects. It fuels my ability to help the JCP become more agile, lean and transparent as part of the JCP.Next effort.” - Werner Keil, Individual Executive Committee (EC) Member, a 2012 JCP Member/Participant of the Year Nominee, JSR Expert Group Member

“The JCP has always been some kind of institution for me,” Markus said. “If in technical doubt, I go there, look for the specifications of the implementation I work with at the moment and verify what I had observed. Since the beginning of my Java journey more than 12 years back now, I have always had a strong relationship with the JCP. Shaping the future of a technology by joining the JCP – giving feedback and contributing to the road ahead through individual JSRs – that brings you to a whole new level.” Calling himself “the new kid on the block,” he explained that for years he was afraid to join the JCP and contribute. But in reality, “Every single one of the big names I meet from the different Expert Groups is a nice person. People you can actually work with,” he says. “And nobody blames you for things you don’t know. As long as you are committed and bring what is worth the most: passion, experience and the desire to make a difference.” - Markus Eisele, a 2012 JCP Member of the Year Nominee, JSR Expert Group Member

Congratulations again to all of the nominees and winners of the JCP Program Awards.  Next year, we will add another award for the group of JUG members (not an entire JUG) that makes the best contribution to the Adopt-a-JSR program.  Let us know if you have other suggestions or improvements.

    Read the article

  • Prevent Changing the Screen Saver and Wallpaper in Windows 7

    - by Mysticgeek
Sometimes you might not want users to have the ability to change screen savers and wallpaper on Windows 7 workstations. Today we look at how to prevent them from changing either one or both.

You might administer computers in your home or small office and find it annoying when users continuously change the wallpaper and screen savers to something obnoxious. A lot of times they might be inexperienced users who download these so-called “wonderful and free” screen saver/wallpaper packages from shady sites that include loads of spyware. Preventing users from changing them is another helpful way to avoid time wasted switching things back.

Prevent Changing Screen Savers & Wallpaper Using Group Policy Editor

Note: This method uses Group Policy, which is not available in Home versions of Windows 7.

Open the Start Menu and enter gpedit.msc into the Search box and hit Enter. When Local Group Policy Editor opens, navigate to User Configuration \ Administrative Templates \ Control Panel \ Personalization. Then in the right column double-click on Prevent changing desktop background. Now check the radio button next to Enabled, then click OK. Back on the Group Policy screen, double-click on Prevent changing screen saver. In the next screen select the radio button next to Enabled, click OK, then close out of Group Policy Editor.

Now when a user goes into the Personalization section, the Desktop Background hyperlink is grayed out and inactive. Notice the message “One or more of the settings on this page has been disabled by the system administrator” at the bottom of the section. If they click to change the Screen Saver, an error message will pop up letting them know the function is disabled.

Prevent Changing Screen Savers & Wallpaper Using a Registry Hack

You can also make a couple of Registry changes to prevent users from changing the wallpaper and screen saver… which will work on Home versions of Windows 7. Before making any Registry changes make sure you back it up first. Open the Registry by typing regedit into the Search box in the Start menu and hit Enter.

First we’ll start with the wallpaper. Navigate to HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\System and create a new String Value and name it Wallpaper. Then modify the Value data to point to the location of the wallpaper you want it to always be. In this example it’s our main wallpaper on our local drive… then click OK.

Now let’s make sure they can’t change the screen saver. In the same Registry location, we need to make a new DWORD (32-bit) Value. Give it the Value name NoDispScrSavPage and the Value data of “1” and click OK. Close out of the Registry and restart the machine, or simply log off then back on again, for the changes to take effect.

Results

For the wallpapers, a user can still go in and see the selections; however, if they try to change it to something else… it will just go back to the Personalization screen and no changes will be made, as we set the value to only be the background we specified. If the user tries to make a change to the screen saver, the hyperlink will be grayed out and inactive, and the message “One or more of the settings on this page has been disabled by the system administrator” will be displayed at the bottom of the section.

Conclusion

If you’re tired of users changing the wallpaper and screen saver, and want another way to help avoid malware, locking down these settings can help a lot. Again, before making any changes to the Registry, make sure to back it up.
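For reference, here is a sketch of the same two Registry values expressed as a .reg file you could import (the wallpaper path is a placeholder; point it at your own image):

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\System]
; String value pointing at the wallpaper you want to enforce (hypothetical path)
"Wallpaper"="C:\\Wallpapers\\company.jpg"
; DWORD value that hides the Screen Saver settings page
"NoDispScrSavPage"=dword:00000001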
These settings should work in Vista and XP as well.

    Read the article

  • Creating a Reverse Proxy with URL Rewrite for IIS

    - by OWScott
There are times when you need to reverse proxy through a server. The most common example is when you have an internal web server that isn’t exposed to the internet, and you have a public web server accessible to the internet. If you want to serve up traffic from the internal web server, you can do this through the public web server by creating a tunnel (aka reverse proxy). Essentially, you can front the internal web server with a friendly URL, even hiding custom ports. For example, consider an internal web server with a URL of http://10.10.0.50:8111. You can make that available through a public URL like http://tools.mysite.com/ as seen in the following image. The URL can be made public, or it can be used for your internal staff and be password protected and/or locked down by IP address.

This is easy to do with URL Rewrite and IIS. You will also need Application Request Routing (ARR) installed, even though for a simple reverse proxy you won’t use most of ARR’s functionality. If you don’t already have URL Rewrite and ARR installed you can do so easily with the Web Platform Installer.

A lot can be said about reverse proxies and the many different situations and ways to route traffic and handle different URL patterns. However, my goal here is to get you up and going in the easiest way possible. Then you can dig in deeper after you get the base configuration in place. URL Rewrite makes a reverse proxy very easy to set up. Note that the URL Rewrite Add Rules template doesn’t include Reverse Proxy at the server level. That’s not to say that you can’t create a server-level reverse proxy, but the URL Rewrite rules template doesn’t help you with that.

Getting Started

First you must create a website on your public web server that has the public bindings that you need. Alternately, you can use an existing site and route using conditions for certain traffic. After you’ve created your site, open up URL Rewrite at the site level. Using the “Add Rule(s)…” template that is opened from the right-hand actions pane, create a new Reverse Proxy rule. If you receive a prompt (the first time) that the proxy functionality needs to be enabled, select OK. This is telling you that a proxy can route traffic outside of your web server, which happens to be our goal in this case. Be aware that reverse proxy rules can be dangerous if you open sites from inside your network to the world, so just be aware of what you’re doing and why.

The next and final step of the template asks a few questions. The first textbox asks the name of the internal web server. In our example, it’s 10.10.0.50:8111. This can be any URL, including a subfolder like internal.mysite.com/blog. Don’t include the http or https here; the template assumes that it’s not entered. You can choose whether to perform SSL Offloading or not. If you leave this checked then all requests to the internal server will be over HTTP regardless of the original web request. This can help with performance and SSL bindings if all requests are within a trusted network. If the network path between the two web servers is not completely trusted and safe, then uncheck this.

Next, the template enables you to create an outbound rule. This is used to rewrite links in the page to look like your public domain name rather than the internal domain name. Outbound rules have a lot of CPU overhead because the entire web content needs to be parsed and updated. However, if you need it, then it’s well worth the extra CPU hit on the web server.
If you check the “Rewrite the domain names of the links in HTTP responses” checkbox then the From textbox will be filled in with what you entered for the inbound rule. You can enter your friendly public URL for the outbound rule. This will essentially replace any reference to 10.10.0.50:8111 (or whatever you enter) with tools.mysite.com in all <a>, <form>, and <img> tags on your site.

That’s it! Well, there is a lot more that you can do, but this will give you the base configuration. You can now visit tools.mysite.com on your public web server and it will serve up the site from your internal web server. You should see two rules show up: one inbound and one outbound. You can edit these, add conditions, and tweak them further as needed.

One common issue that can occur without outbound rules has to do with compression. If you run into errors with the new proxied site, try turning off compression to confirm if that’s the issue. Here’s a link with details on how to deal with compression and outbound rules.

I hope this was helpful to get started and to see how easy it is to create a simple reverse proxy using URL Rewrite for IIS.
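For reference, the template writes its rules into the site’s web.config. Here is a rough sketch of what the generated inbound and outbound rules look like, using the addresses from the example above (the exact attribute names the template emits may differ slightly by version):

<rewrite>
  <rules>
    <!-- Inbound: forward all requests to the internal server -->
    <rule name="ReverseProxyInboundRule1" stopProcessing="true">
      <match url="(.*)" />
      <action type="Rewrite" url="http://10.10.0.50:8111/{R:1}" />
    </rule>
  </rules>
  <outboundRules>
    <!-- Outbound: rewrite internal links in HTML responses back to the public host -->
    <rule name="ReverseProxyOutboundRule1" preCondition="ResponseIsHtml1">
      <match filterByTags="A, Form, Img" pattern="^http(s)?://10.10.0.50:8111/(.*)" />
      <action type="Rewrite" value="http{R:1}://tools.mysite.com/{R:2}" />
    </rule>
    <preConditions>
      <preCondition name="ResponseIsHtml1">
        <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
      </preCondition>
    </preConditions>
  </outboundRules>
</rewrite>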

    Read the article

  • doubleTwist is an iTunes Alternative that Supports Several Devices

    - by Mysticgeek
There are a lot of iTunes users out there, but unfortunately you can’t use it with all of your portable devices. Today we take a look at doubleTwist, which allows you to sync your media with a multitude of portable devices and easily share it as well.

Note: You can run doubleTwist on Windows or Mac, and here we take a look at the Windows version.

Install & Setup doubleTwist

Download and install doubleTwist using the defaults in the wizard… Installation takes several moments and you’ll see the progress while it finishes up. After installation is complete, sign up for an account if you don’t already have one. If you do have an account you can log in right away. Enter in your username, email address, and password, then click Sign Up. You’ll get a confirmation email and need to activate the account before you can sign in. Once you’re all signed up, launch doubleTwist and you’ll be ready to start using it.

doubleTwist Music

The default music store is the Amazon MP3 store, which might appeal to those of you who are tired of the iTunes music store. A lot of times the music is cheaper and available at higher bit rates. You can start searching for music in the Amazon Music Store and previewing songs. To purchase anything though, you will need to sign into your Amazon account. Under Playlists it allows you to import your playlists from iTunes and Windows Media Player, which is a handy feature if you don’t want to set them up again. Of course you can play your songs through the music player on your desktop.

Devices

One of the coolest things about doubleTwist is that it supports a lot of different portable media devices, including iPod, BlackBerry, Windows Mobile, Android, PSP, smartphones, and much more. Unfortunately for Zune users… there isn’t any support for the Zune or Zune HD yet. Here we have a Creative Zen attached and can sync songs, pictures, and podcasts. An HTC-S620 smartphone running Windows Mobile… Even a simple USB drive will be recognized and you can transfer your media to it as well.

Podcasts

Finding your favorite audio and video podcasts is easy with the search feature. You can easily manage and subscribe to podcasts in the subscriptions section. You can watch the video podcasts directly in doubleTwist.

Sharing Media

You can also share digital media with your friends or add it to Flickr and YouTube. You can send any pictures, videos, or music in your library to other people by dragging it over. You can email users individually… or access contacts from your Gmail and Yahoo accounts. There is a limit to how much of a video podcast you can send… only the first 10 minutes. The person you send it to will get a link in their email that points to your My Feed page on the doubleTwist site. There they can access the media you sent… in this example it’s a video podcast, but you can share any media.

Other Features

Under My Profile you can change your avatar and personal information. In Preferences you can choose where media is stored, its startup actions, podcast subscriptions, and manage device syncing.

Conclusion

It’s still in the beta stage so expect some bugs, but overall doubleTwist is a solid media player that is easy to use with a clean interface. It’s simple and doesn’t try to do too much, so it is fairly easy on system resources. The main annoyance is that it tries to catalog all of your media out of the box, which may be alright for some users with smaller media collections, but very irritating to advanced users with large collections.
Also there is currently no support for the Zune, but according to their forums, it’s on the way. At the time of this writing it’s in public beta and can be downloaded for XP, Vista, Windows 7 (32 & 64 bit), and Mac OS X. If you’re looking for an iTunes alternative that works with several different portable devices, you might want to give doubleTwist a try.

Download doubleTwist Public Beta
See If Your Media Device is Supported by doubleTwist

    Read the article

  • My Thoughts On the Xbox 180

    - by Chris Gardner
Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2013/06/21/my-thoughts-on-the-xbox-180.aspx

Everyone seems to be putting their 0.00237 cents into the wishing well over Microsoft’s recent decision to reverse the DRM policy on the Xbox One. However, there have been a few issues that nobody has touched. As such, I have decided to dig 0.00237 cents out of my pocket.

First, let me be clear about this point. I do not support the decision to reverse the DRM policy on the Xbox One. I wanted that point to be expressed first and unambiguously. I will say it again. I do not support the decision to reverse the DRM policy on the Xbox One.

Now that I have that out of the way, let me go into my rationale. This decision removes most of the cool features that enticed me to pre-order the console. No, I didn’t cancel my pre-order. There are still five months before the release of the console, and there is still a plethora of information that we, as consumers, do not have. With that, it should be noted that much of the talk in this post is speculation and rhetoric. I do not have any insider information that you do not possess.

The persistent connection would have allowed the console to do many of the functions for which we have been begging. That demo where someone was playing Ryse, seamlessly accepted a multiplayer challenge in Killer Instinct, played the match (and a rematch), and then jumped back into Ryse? That’s gone, if you bought the game on disc. The new, DRM-free system will require the disc in the system to play a game. That bullet point where one Xbox Live account could have up to 10 slave accounts so families could play together, no matter where they were located? That’s gone as well. The promise of huge, expansive, dynamically changing worlds that was brought to us with the power of cloud computing? Well, “the people” didn’t want there to be a forced, persistent connection. As such, developers can’t rely on a connection and, as such, that feature is gone. This is akin to the removal of the hard drive on the Xbox 360. The list continues, but the enthusiast press has enumerated the list far better than I wish to.

All of this is because the Xbox team saw the HUGE success of Steam and decided to borrow a few ideas. Yes, Steam. The service that everyone hated for the first six months (for the same reasons the Xbox One is getting flak). There was an initial growing pain. However, it is now lauded as the way games distribution should be handled. Unless you are Microsoft.

I do find it curious that many of the features were originally announced for the PS4 during its unveiling. However, much of that was left strangely absent from Sony’s E3 press conference. Instead, we received a single, static slide that basically said the exact opposite of Microsoft’s plans. It is not farfetched to believe that slide came into existence during the approximately seven hours between the two media briefings.

The thing that majorly annoys me over this whole kerfuffle is that the single thing that caused the call to arms is, really, not an issue. Microsoft never said they were going to block used sales. They said it was up to the publisher to make that decision. This would have allowed publishers to reclaim some of the costs of development in subsequent sales of the product. If you sell your game to GameStop for 7 USD, GameStop is going to sell it for 55 USD. That is 48 USD pure profit for them. Some publishers asked GameStop for a small cut. Was this a huge, money-grubbing scheme?

Well, yes, but the idea was that they have to handle server infrastructure for dormant accounts, etc. Of course, GameStop flatly refused, and the Online Pass was born. Fortunately, this trend didn’t last, and most publishers have stopped the practice.

The ability to sell “licenses” has already begun to be challenged. Are you living in the EU? If so, companies must allow you to sell digital property. With this precedent in place, it’s only a matter of time before other areas follow suit. If GameStop were smart, they would have immediately contacted every publisher out there to get the rights to become a clearing house for these licenses. Then, they keep their business model and could reduce their brick and mortar footprint.

The digital landscape is changing. We need to not block this process. As Seth MacFarlane best said, “Some issues are so important that you should drag people kicking and screaming.” I believe this was said on an episode of Real Time with Bill Maher about the issue of gay marriage. Much like the original source, this is an issue where we need to drag people to the correct, progressive position. Microsoft, as a company, actually has the resources to weather the transition period. They have a great pool of first and second party developers that can leverage this new framework to prove its validity. Over time, third party developers will get excited to use these tools.

As an old C++ guy, I resisted C# for years. Now, I think it’s one of the best languages I’ve ever used. I have a server room and a Co-Lo full of servers, so I originally didn’t see the value in Azure. Now, I wish I could move every one of my projects into the cloud. I still LOVE getting physical packaging, as my music and games collection will proudly attest. However, I have started to see the value in pure digital, and have found ways to integrate this into the ways I consume those products.

I can, honestly, understand how some parts of the population would be very apprehensive about this new landscape. There were valid arguments about people with no internet access. There are ways to combat these problems. These methods do not require us to throw the baby out with the bathwater. However, the number of people in the computer industry that I have seen cry foul is truly appalling. We are the forward-looking people that help show how technology can improve people’s lives. If we can’t see the value of the brief pain involved with an exciting new ecosystem, then who will?

    Read the article

  • Exploring the Excel Services REST API

    - by jamiet
    Over the last few years Analysis Services guru Chris Webb and I have been on something of a crusade to enable better access to data that is locked up in countless Excel workbooks littering the hard drives of enterprise PCs. The most prominent manifestation of that crusade up to now has been a forum thread that Chris began on Microsoft Answers entitled Excel Web App API? Chris began that thread with:

I was wondering whether there was an API for the Excel Web App? Specifically, I was wondering if it was possible (or if it will be possible in the future) to expose data in a spreadsheet in the Excel Web App as an OData feed, in the way that it is possible with Excel Services?

Until recently the last 10 words of that paragraph, "in the way that it is possible with Excel Services", had completely washed over me. However, a comment by Josh Booker on my recent blog post Thoughts on ExcelMashup.com (and a rant) made me think again. Josh said:

Excel Services is a service application built for SharePoint 2010 which exposes a REST API for Excel documents. We're looking forward to pros like you giving it a try now that Office 365 makes SharePoint more easily accessible. Can't wait for your future blog about using the REST API to load data from Excel on Office 365 in SSIS.

That made me think that perhaps the Excel Services REST API is something I should be looking into, and indeed that is what I have been doing over the past few days. And you know what? I'm rather impressed with some of what Excel Services' REST API has to offer.

Unfortunately, Excel Services' REST API also has one debilitating aspect that renders this blog post much less useful than it otherwise would be: it is not publicly available from the Excel Web App on SkyDrive. Therefore all I can do in this blog post is show you screenshots of what the REST API provides in SharePoint rather than linking you directly to those REST resources; that's a great shame, because one of the benefits of a REST API is that it is easily and ubiquitously demonstrable from a web browser. Instead I am hosting a workbook on SharePoint in Office 365, because that does include Excel Services' REST API, but, again, all I can do is show you screenshots. N.B. If anyone out there knows how to make Office-365-hosted spreadsheets publicly accessible (i.e. without requiring a username/password), please do let me know (because knowing which forum on which to ask the question is an exercise in futility).

In order to demonstrate Excel Services' REST API I needed some decent data, and for that I used the World Tourism Organization Statistics Database and Yearbook - United Nations World Tourism Organization dataset hosted on Azure Datamarket (it's free, by the way); this dataset "provides comprehensive information on international tourism worldwide and offers a selection of the latest available statistics on international tourist arrivals, tourism receipts and expenditure" and you can explore the data for yourself here. If you want to play along at home by viewing the data as it exists in Excel then it can be viewed here. Let's dive in.

The root of Excel Services' REST API is the model resource, which resides at:

http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model

Note that this is true for every workbook hosted in a SharePoint document library: each Excel workbook is a RESTful resource. (Update: Mark Stacey on Twitter tells me that "It's turned off by default in onpremise Sharepoint (1 tickbox to turn on though)". Thanks Mark!)
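If you would rather poke at these resources programmatically than in a browser, here is a minimal Python sketch using the requests library. It is purely illustrative: the server name and credentials are placeholders, and a real SharePoint farm may demand NTLM or claims-based authentication rather than the basic auth shown here.

import requests

# Root 'model' resource of the workbook (placeholder server name).
MODEL = ("http://server/_vti_bin/ExcelRest.aspx"
         "/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model")

# The model resource returns an ATOM feed listing the four top-level
# resources: Ranges, Charts, Tables and PivotTables.
resp = requests.get(MODEL, auth=("user@example.com", "password"))  # placeholder credentials
resp.raise_for_status()
print(resp.text)  # raw ATOM XML enumerating the resources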
The data is provided as an ATOM feed, but I have Firefox's feed-reading ability turned on so you don't see the underlying XML goo. As you can see, there are four top-level resources - Ranges, Charts, Tables and PivotTables - and exploring one of those resources is where things get interesting. Let's take a look at the Tables resource:

http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables

Our workbook contains only one table, called ‘Table1’ (to reiterate, you can explore this table yourself here). Viewing that table via the REST API is pretty easy; we simply append the name of the table onto our previous URI:

http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1')

As you can see, that quite simply gives us a representation of the data in that table. What you cannot see from this screenshot is that this is pure HTML being served up; that is all well and good, but we can actually do more interesting things. If we specify that the data should be returned not as HTML but as an image:

http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1')?$format=image

then the data comes back as a pure image that can be used in any web page where you would ordinarily use images. This is the thing I really like about Excel Services' REST API: we can embed an image in any web page, but instead of being a copy of the data, that image is actually live. If the underlying data in the workbook were to change, hitting refresh will show a new image. Pretty cool, no? The same is true of any Charts or PivotTables in your workbook; those can be embedded as images too, and if the underlying data changes, boom, the image in your web page changes too.

There is a lot of data in the workbook, so the image returned by that previous URI is too large to show here; instead, let's take a look at a different resource, this time a range:

http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15')

That URI returns cells A1 to C15 from a worksheet called “Data”. And if we ask for that as an image again:

http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15')?$format=image

Were this image resource not behind a username/password, this would be a live image of the data in the workbook, as opposed to one that I had to copy and upload elsewhere. Nonetheless, I hope this little wrinkle doesn't detract from the innate value of what I am trying to articulate here: that an existing image in a web page can be changed on the fly simply by inserting some data into an Excel workbook. I for one think that that is very cool indeed!
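To make that concrete, here is a hedged Python sketch that pulls the range above down as an image and saves it locally. As before, the server name and credentials are placeholders and your farm's authentication scheme may differ.

import requests

# Range resource rendered as an image (placeholder server name).
URL = ("http://server/_vti_bin/ExcelRest.aspx"
       "/Documents/TourismExpenditureInMillionsOfUSD.xlsx"
       "/model/Ranges('Data!A1|C15')?$format=image")

resp = requests.get(URL, auth=("user@example.com", "password"))  # placeholder credentials
resp.raise_for_status()

# The bytes are a rendered picture of the current cell values;
# re-running after the workbook changes yields an updated image.
with open("tourism_range.png", "wb") as f:
    f.write(resp.content)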
I think that's enough in the way of demo for now, as this shows what is possible using Excel Services' REST API. Of course, not all features work quite how I would like, and here is a bulleted list of some of my more negative feedback:

- The URIs are pig-ugly. Are "_vti_bin" and "ExcelRest.aspx" really necessary as part of the URI? Would this not be better: http://server/Documents/TourismExpenditureInMillionsOfUSD.xlsx/Model/Tables(‘Table1’) ? That URI provides the necessary addressability and is a lot easier to remember.
- Discoverability of these resources is not easy; we essentially have to hand-crank a URI ourselves. Take the example of embedding a chart into a blog post: would it not be better if I could browse first through the document library to an Excel workbook, and THEN through the workbook to the chart/range/table that I am interested in? Call it a wizard if you like. That would be really cool and would, I am sure, promote this feature and cut down on the copy-and-paste disease that the REST API is meant to alleviate.
- The resources that I demonstrated can be returned as feeds as well as images or HTML simply by changing the format parameter to ?$format=atom; however, for some inexplicable reason they don't return OData, and no one on the Excel Services team can tell me why (believe me, I have asked).
- $format is an OData parameter, but other useful parameters such as $top and $filter are not supported. It would be nice if they were.
- Although I haven't demonstrated it here, Excel Services' REST API does provide a makeshift way of altering the data by changing the value of specific cells; however, it does not allow you to add new data to the workbook. Google Docs allows this, and that was one of the motivating factors for Chris Webb's forum post that I linked to above.
- None of this works for Excel workbooks hosted on SkyDrive.

This blog post is as long as it needs to be for a short introduction, so I'll stop now. If you want to know more then I recommend checking out a few links:

- Excel Services REST API documentation on MSDN
- So what does REST on Excel Services look like??? by Shahar Prish
- Excel Services in SharePoint 2010 REST API Syntax by Christian Stich

Any thoughts? Let's hear them in the comments section below!

@Jamiet

    Read the article

  • Convert a PDF eBook to ePub Format

    - by Matthew Guay
    Would you like to read a PDF eBook on an eReader or mobile device, but aren’t happy with the performance? Here’s how you can convert your PDFs to the popular ePub format so you can easily read them on any device.

PDFs are a popular format for eBooks since they render the same on any device and can preserve the exact layout of the print book. However, this benefit is their major disadvantage on mobile devices, as you often have to zoom and pan back and forth to see everything on the page. ePub files, on the other hand, are an increasingly popular option: they can reflow to fill your screen instead of sticking to a strict layout. With the free Calibre program, you can quickly convert your PDF eBooks to ePub format.

Getting Started

Download the Calibre installer (link below) for your operating system, and install as normal. Calibre works on recent versions of Windows, OS X, and Linux. The Calibre installer is very streamlined, so the install process was quite quick.

Calibre is a great application for organizing your eBooks. It can automatically sort your books by their metadata, and even display their covers in a Coverflow-style viewer. To add an eBook to your library, simply drag and drop the file into the Calibre window, or click Add books at the top. Here you can choose to add all the books from a folder and more. Calibre will then add the book(s) to your library, import the associated metadata, and organize them in the catalog.

Convert your Books

Once you’ve imported your books into Calibre, it’s time to convert them to the format you want. Select the book or books you want to convert, and click Convert E-books. Select whether you want to convert them individually or bulk-convert them. The converter window has lots of options, so you can get your ePub book exactly the way you want. You can simply click Ok and go with the defaults, or you can tweak the settings. Do note that the conversion will only work successfully with PDFs that contain actual text. Some PDFs are actually images scanned in from the original books; these will appear just like the PDF after the conversion, and won’t be any easier to read.

On the first tab, you’ll notice that Calibre will repopulate most of the metadata fields with info from your PDF. It will also use the first page of the PDF as the cover. Edit any of the information that may be incorrect, and add any additional information you want associated with the book. If you want to convert your eBook to a format other than ePub, Calibre’s got you covered, too. On the top right, you can choose to output the converted eBook into many different file formats, including the Kindle-friendly MOBI format. One other important settings page is the Structure Detection tab. Here you can choose to have it remove headers and footers in the converted book, as well as automatically detect chapter breaks.

Click Ok when you’ve finished choosing your settings, and Calibre will convert the book. This may take a few minutes, depending on the size of the PDF. If the conversion seems to be taking too long, you can click Show job details for more information on the progress. The conversion usually works well, but we did have one job freeze on us. When we checked the job details, it indicated that the PDF was copy-protected. Most PDF eBooks, however, worked fine.
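As a side note, Calibre also ships a command-line converter, ebook-convert, which is handy if you have a whole folder of PDFs to process. Below is a small Python sketch that batch-converts a directory; it assumes Calibre is installed with its ebook-convert tool on your PATH, and the folder names are just examples.

import subprocess
from pathlib import Path

SRC = Path("pdf_books")    # example input folder
DST = Path("epub_books")   # example output folder
DST.mkdir(exist_ok=True)

for pdf in SRC.glob("*.pdf"):
    epub = DST / (pdf.stem + ".epub")
    # Calibre's ebook-convert infers input/output formats
    # from the file extensions.
    subprocess.run(["ebook-convert", str(pdf), str(epub)], check=True)
    print("converted", pdf.name, "->", epub.name)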
Now, back in the main Calibre window, select your book and save it to disk. You can choose to save only the EPUB format, or you can select Save to disk to save all formats of the book to your computer. You can also view the ePub file directly in Calibre’s built-in eBook viewer. This is the PDF book we converted, and it looks fairly good in the converted format. It does have some odd line breaks and some misplaced numbers, but on the whole the converted book is much easier to read, especially on small mobile devices. Even images get included inline, so you shouldn’t be missing anything from the original eBook.

Conclusion

Calibre makes it simple to read your eBooks in any format you need. It is a project in constant development, with regular updates adding better stability and features. Whether you want to read your PDF eBooks on a Sony Reader, Kindle, netbook or smartphone, your books will now be more accessible than ever. And with thousands of free PDF eBooks out there, you’ll be sure to always have something to read. If you’d like some geeky PDF eBooks, Microsoft Press is offering a number of free PDF eBooks right now. Check them out at this link (Account Required).

Download the Calibre eBook program

    Read the article

  • SQL SERVER – Planned and Unplanned Availability Group Failovers – Notes from the Field #031

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of the Notes from the Field series. AlwaysOn is a very complex subject, and very little information about it is available online, so when a very common question related to AlwaysOn comes up, people get confused. In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and developers face in their careers, related to planned and unplanned Availability Group failovers. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of John in his own words.

Whenever a disaster occurs it will be a stressful scenario, regardless of how small or big the disaster is. This gets multiplied when it is your first time working with newer technology or the first time you are going through a disaster without a proper run book. Today, we’re going to help you establish a run book for creating a planned failover with availability groups. To make today’s session simple, we’re going to have two instances of SQL Server 2012 included in an availability group and walk through the steps of doing an unplanned failover. We will focus on using the user interface and T-SQL to complete the failovers. We are going to use a two-replica Availability Group where each replica is in another location; therefore, we will be covering asynchronous commit mode (non-automatic failover). The following is a breakdown of the availability group used today.

Seeing the following screen might be scary the first time you come across an unplanned failover. It looks like our test database used in this Availability Group is not functional, and it currently isn’t. The database status is "not synchronizing", which makes sense because the primary replica went down, so it couldn’t synchronize. With that said, we can still fail over and make it functional while we troubleshoot why we lost our primary replica.

To start, we are going to right-click on the availability group that needs to be restarted and select Failover. This brings up a wizard which will walk you through the several steps needed to complete the failover using the graphical user interface provided with SQL Server Management Studio (SSMS). You are going to see warning messages simply because we are in asynchronous commit mode and cannot guarantee ‘no data loss’ when we fail over. Just in case you missed it, you get another screen warning you about potential data loss because we are in asynchronous mode. Next we get to connect to the specific replica we want to become the primary replica after the failover occurs. In our case, we only have two replicas, so this is trivial. In order to fail over, it’s required to connect to the replica that will become primary. The following screen shows that the connection has been made successfully. Next, you will see the final summary screen. Once again, this reminds you that the failover action will cause data loss, as we’re using asynchronous commit mode due to the distance between the instances used for disaster recovery. Finally, once the failover is completed, you will see the following screen.

If you followed along this far, you might be wondering what T-SQL scripts are generated by clicking through all the sections of the wizard. If you have used Database Mirroring in the past you might be surprised. It’s not too different, which makes sense, because the data is being replicated via SQL Server endpoints just like good old database mirroring. Now we’re going to take a look at how to do a failover with just T-SQL. First, we need to open a new query window and run our query in SQLCMD mode; if you haven’t used SQLCMD mode before, it can be enabled in SSMS from the Query menu. Now you can run the failover statement: notice that we connect to the replica we want to become primary after the failover and specify that we force failover, allowing data loss.
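The statement itself appeared only as a screenshot in the original post; the following sketch is consistent with the failback script shown below, assuming the disaster-recovery replica being promoted is named SQL2012PROD2 (an illustrative name):

-- Execute in SQLCMD mode, connected to the replica being promoted.
:Connect SQL2012PROD2
ALTER AVAILABILITY GROUP [AGSQL2] FORCE_FAILOVER_ALLOW_DATA_LOSS;
GO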
We can use the following script to fail back when our primary instance comes back online.

-- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.
:Connect SQL2012PROD1
ALTER AVAILABILITY GROUP [AGSQL2] FORCE_FAILOVER_ALLOW_DATA_LOSS;
GO

Are your servers running at optimal speed or are you facing any SQL Server performance problems? If you want to get started with the help of experts, read more over here: Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

< Previous Page | 621 622 623 624 625 626 627 628 629 630 631 632  | Next Page >