Search Results

Search found 15045 results on 602 pages for 'template engine'.


  • How to display Sharepoint Data in a Windows Forms Application

    - by Michael M. Bangoy
    In this post I'm going to demonstrate how to retrieve SharePoint data and display it in a Windows Forms application.

    1. Open Visual Studio 2010 and create a new project.
    2. In the project template list, select Windows Forms Application.
    3. To communicate with SharePoint from a Windows Forms application we need to add the two SharePoint client DLLs located in c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI.
    4. Select Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll. (Your solution should look like the one below.)
    5. Open Form1 in design view and, from the Toolbox, add a Button, a TextBox, a Label and a DataGridView to the form.
    6. Next, double-click the Load button; this will open the code view of the form. Add a using statement to reference the SharePoint client library, then create two methods, one to load the site title and one to load the lists. See below:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Linq;
    using System.Text;
    using System.Security;
    using System.Windows.Forms;
    using SP = Microsoft.SharePoint.Client;

    namespace ClientObjectModel
    {
        public partial class Form1 : Form
        {
            // url of the SharePoint site
            const string _context = "theurlofthesharepointsite";

            public Form1()
            {
                InitializeComponent();
            }

            private void Form1_Load(object sender, EventArgs e)
            {
            }

            private void getsitetitle()
            {
                SP.ClientContext context = new SP.ClientContext(_context);
                SP.Web _site = context.Web;
                context.Load(_site);
                context.ExecuteQuery();
                txttitle.Text = _site.Title;
                context.Dispose();
            }

            private void loadlist()
            {
                using (SP.ClientContext _clientcontext = new SP.ClientContext(_context))
                {
                    SP.Web _web = _clientcontext.Web;
                    SP.ListCollection _lists = _clientcontext.Web.Lists;
                    _clientcontext.Load(_lists);
                    _clientcontext.ExecuteQuery();

                    DataTable dt = new DataTable();
                    DataColumn column = new DataColumn();
                    column.DataType = Type.GetType("System.String");
                    column.ColumnName = "List Title";
                    dt.Columns.Add(column);

                    foreach (SP.List listitem in _lists)
                    {
                        DataRow row = dt.NewRow();
                        row["List Title"] = listitem.Title;
                        dt.Rows.Add(row);
                    }

                    dataGridView1.DataSource = dt;
                }
            }

            private void cmdload_Click(object sender, EventArgs e)
            {
                getsitetitle();
                loadlist();
            }
        }
    }
    ```

    7. That's it. Hit F5 to run the application, then click the Load button. Your screen should look like the one below. Hope this helps.
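
    If you also need the items inside one of those lists, the same client object model pattern applies. The following is a minimal sketch, not part of the original post: it assumes a list named "Announcements" exists on the site and reuses the _context URL and SP alias from the code above.

    ```csharp
    // Sketch only: load up to 100 items from a list called "Announcements"
    // using the same SharePoint client object model pattern as above.
    private void loadlistitems()
    {
        using (SP.ClientContext ctx = new SP.ClientContext(_context))
        {
            SP.List list = ctx.Web.Lists.GetByTitle("Announcements");
            SP.CamlQuery query = SP.CamlQuery.CreateAllItemsQuery(100); // cap the result set
            SP.ListItemCollection items = list.GetItems(query);
            ctx.Load(items);
            ctx.ExecuteQuery();

            foreach (SP.ListItem item in items)
            {
                // "Title" is a standard field on most list templates
                System.Diagnostics.Debug.WriteLine(item["Title"]);
            }
        }
    }
    ```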

    Read the article

  • Delivering SOA Governance with EAMS and Oracle Enterprise Repository by Link Consulting Team

    - by JuergenKress
    In the last 12 years Link Consulting has been making its presence in specific areas such as Governance and Architecture, both in terms of practices and methodologies, products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience and combines the architecture management solution with OER in order to deliver a product specialized for SOA Governance that gathers the better of two worlds in solution that enables SOA Governance projects, initiatives and programs. Enterprise Architecture Management System Enterprise Architecture Management System (EAMS), is an automation based solution that enables the efficient management of Enterprise Architectures. The solution uses configured enterprise repositories and takes advantages of its features to provide automation capabilities to the users. EAMS provides capabilities to create/customize/analyze repository data, architectural blueprints, reports and analytic charts. Oracle Enterprise Repository Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of the OER include: Asset Management Asset Lifecycle Management Usage Tracking Service Discovery Version Management Dependency Analysis Portfolio Management EAMS - OER Edition The solution takes the advantages and features from both products and combines them in a symbiotic tool that enhances the quality of SOA Governance Initiatives and Programs. EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries. The SOA blueprints The solution comes with a set of predefined architectural representations that help the organization better perceive their SOA landscape. More blueprints can be easily created in order to accommodate the organizations needs in terms of detail, audience and metadata. Charts & Dashboards The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets. Time Based Visualization All representations are time bound, and with EAMS - OER you can truly govern SOA with a complete view of the Past, Present and Future; The solution delivers Gap Analysis, a project oriented approach while taking into consideration the As-Was, As-Is an To-Be. Time based visualization differentiating factors: Extensive automation and maintenance of architectural representations Organization wide solution. Easy access and navigation to and between all architectural artifacts and representations. 
Flexible meta-model, customization and extensibility capabilities. Lifecycle management and enforcement of the time dimension over all the repository content. Profile based customization. Comprehensive visibility Architectural alignment Friendly and striking user interfaces For more information on EAMS visit us here. For more information on SOA visit us here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: Link Consulting,OER,OSR,SOA Governance,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • SQL SERVER – Last Two Days to Get FREE Book – Joes 2 Pros Certification 70-433

    - by pinaldave
    Earlier this week we announced that we will be giving away a FREE SQL Wait Stats book to everybody who gets the SQL Server Joes 2 Pros Combo Kit. We had a fantastic, overwhelming response to the offer. We knew there would be a great response, but we want to honestly say thank you to all of you for making it happen. Rick and I want to express our special thanks to all of you who are reading our books. The offer is still on and there are two more days to take advantage of it: everybody who buys our best-selling combo kit will also receive our other most popular book, SQL Wait Stats. Please read all the details of the offer here. The books are a great resource for anyone who wants to learn SQL Server from the fundamentals and eventually go down the certification path of exam 70-433. Whether or not you are taking the exam, the book helps every SQL developer learn the subject from the ground up. Exam 70-433 covers the following subjects, and the book covers their fundamentals:

    - Create and alter tables.
    - Create and alter views.
    - Create and alter indexes.
    - Create and modify constraints.
    - Implement data types.
    - Implement partitioning solutions.
    - Create and alter stored procedures.
    - Create and alter user-defined functions (UDFs).
    - Create and alter DML triggers.
    - Create and alter DDL triggers.
    - Create and deploy CLR-based objects.
    - Implement error handling.
    - Manage transactions.
    - Query data by using SELECT statements.
    - Modify data by using INSERT, UPDATE, and DELETE statements.
    - Return data by using the OUTPUT clause.
    - Modify data by using MERGE statements.
    - Implement aggregate queries.
    - Combine datasets (INTERSECT, EXCEPT).
    - Implement subqueries.
    - Implement CTE (common table expression) queries.
    - Apply ranking functions.
    - Control execution plans.
    - Manage international considerations.
    - Integrate Database Mail.
    - Implement full-text search.
    - Implement scripts by using Windows PowerShell and SQL Server Management Objects (SMOs).
    - Implement Service Broker solutions.
    - Track data changes (change data capture).
    - Retrieve relational data as XML.
    - Transform XML data into relational data.
    - Manage XML data.
    - Capture execution plans.
    - Collect output from the Database Engine Tuning Advisor.
    - Collect information from system metadata.

    Availability of the book: USA - Amazon | India - Flipkart | Indiaplaza. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Vertical Scrolling In Tile Based XNA Platformer

    - by alec100_94
    I'm making a 2D platformer in XNA 4.0. I have created a working tile engine, which works well for my purposes, and Horizontal Scrolling works flawlessly, however I am having great trouble with Vertical scrolling. I Basically want the camera to scroll up (world to scroll down) when the player reaches a certain Y co-ordinate, and I would also like to automatically scroll back down if coming down, and that co-ordinate is passed. My biggest problem is I have no real way of detecting the direction the player is moving in using only the Y Co-ord. Here Is My Code Code For The Camera Class (which appears to be a very different approach to most camera classes I have seen). using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Xna.Framework; namespace Marvin { class Camera : TileEngine { public static bool startReached; public static bool endReached; public static void MoveRight(float speed = 2) { //Moves The Position of Each Tile Right foreach (Tile t in tiles) { if(t!=null) { t.position.X -= speed; } } } public static void MoveLeft(float speed = 2) { //Moves The Position of Each Tile Right foreach (Tile t in tiles) { if(t!=null) { t.position.X += speed; } } } public static void MoveUp(float speed = 2) { foreach (Tile t in tiles) { if(t!=null) { t.position.Y += speed; } } } public static void MoveDown(float speed = 2) { foreach (Tile t in tiles) { if(t!=null) { t.position.Y -= speed; } } } public static void Restrain() { if(tiles.Last().position.X<Main.graphics.PreferredBackBufferWidth-tiles.Last().size.X) { MoveLeft(); endReached = true; } else { endReached = false; } if(tiles[1].position.X>0) { MoveRight(); startReached = true;} else { startReached = false; } } } } Here is My Player Code for Left and Right Scrolling/Moving if (Main.currentKeyState.IsKeyDown(Keys.Right)) { Camera.MoveRight(); if(Camera.endReached) { MoveRight(2); } else { if(marvin.GetRectangle().X!=Main.graphics.PreferredBackBufferWidth-(marvin.GetRectangle().X+marvin.GetRectangle().Width)) { MoveRight(2); Camera.MoveLeft(); } } } if(Main.currentKeyState.IsKeyDown(Keys.Left)) { Camera.MoveLeft(); if(Camera.startReached) { MoveLeft(2); } else { if(marvin.GetRectangle().X!=Main.graphics.PreferredBackBufferWidth-(marvin.GetRectangle().X+marvin.GetRectangle().Width)) { MoveLeft(2); Camera.MoveRight(); } } } Camera.Restrain(); if(marvin.GetRectangle().X>Main.graphics.PreferredBackBufferWidth-marvin.GetRectangle().Width) { MoveLeft(2); } if(marvin.GetRectangle().X<0) { MoveRight(2); } And Here Is My Player Jumping/Falling Code which may cause some conflicts with the vertical camera movement. 
if (!jumping) { if(!TileEngine.TopOfTileCollidingWith(footBounds)) { MoveDown(5); } else { if(marvin.GetRectangle().Y != TileEngine.LastPlatformStoodOnTop()-marvin.GetRectangle().Height) { float difference = (TileEngine.LastPlatformStoodOnTop()-marvin.GetRectangle().Height) - (marvin.GetRectangle().Y); marvin.SetRectangle(marvin.GetRectangle().X,(int)(marvin.GetRectangle().Y+difference)); armR.SetRectangle(armR.GetRectangle().X,(int)(armR.GetRectangle().Y+difference)); armL.SetRectangle(armL.GetRectangle().X,(int)(armL.GetRectangle().Y+difference)); eyeL.SetRectangle(eyeL.GetRectangle().X,(int)(eyeL.GetRectangle().Y+difference)); eyeR.SetRectangle(eyeR.GetRectangle().X,(int)(eyeR.GetRectangle().Y+difference)); } } } if (Main.currentKeyState.IsKeyDown(Keys.Up) && Main.previousKeyState.IsKeyUp(Keys.Up) && TileEngine.TopOfTileCollidingWith(footBounds)) { jumping = true; } if(jumping) { if(TileEngine.LastPlatformStoodOnTop()>0 && (TileEngine.LastPlatformStoodOnTop() - footBounds.Bottom)<120) { MoveUp(5); } else { jumping = false; } } All player code I have tried for vertical movements has failed, or caused weird results (like falling through platforms), and most have been a variation on the method I described above, hence I have not included it. I would really appreciate some help implementing a simple vertical scrolling into this game, Thanks.
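
    One way to approach this, sketched below, is to stop moving the tiles and instead keep a camera offset that follows the player's Y position inside a vertical dead zone; everything is then drawn shifted by that offset. This is not the poster's code, just a minimal illustration: Camera2D and the 0.35/0.65 band are made-up names and values.

    ```csharp
    using Microsoft.Xna.Framework;

    // Sketch of a dead-zone camera: the world stays put and the camera offset
    // follows the player vertically only when the player leaves the middle band.
    public class Camera2D
    {
        public Vector2 Offset;                  // subtract this from world positions when drawing
        private readonly float deadZoneTop;
        private readonly float deadZoneBottom;

        public Camera2D(float viewportHeight)
        {
            deadZoneTop = viewportHeight * 0.35f;
            deadZoneBottom = viewportHeight * 0.65f;
        }

        public void Follow(Vector2 playerWorldPosition)
        {
            float playerScreenY = playerWorldPosition.Y - Offset.Y;

            if (playerScreenY < deadZoneTop)            // player climbed above the band: scroll up
                Offset.Y -= deadZoneTop - playerScreenY;
            else if (playerScreenY > deadZoneBottom)    // player fell below the band: scroll down
                Offset.Y += playerScreenY - deadZoneBottom;
        }
    }

    // At draw time each world-space position is shifted by the offset, e.g.:
    // spriteBatch.Draw(texture, worldPosition - camera.Offset, Color.White);
    ```

    Because the scroll amount is derived from the player's position rather than from key presses, the camera automatically scrolls back down when the player descends past the band, which sidesteps the direction-detection problem described above.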

    Read the article

  • Don't Forget To Enjoy Life

    - by Justin
    I have a pretty clear stance on posting personal information in my blogs. I tend to avoid it almost instinctively. Part of that is because I am a somewhat private person. And the other is because I know how easy it is for personal information to be gathered and collected from sources such as blogs. So, this has remained a tech only blog for me. I've only posted topics mostly related to issues I have encountered at work. In a way this blog is a 'bookmark' for me. If I post something here and run into the issue again it allows me to refer back to a convenient place where the 'fix' is documented in a way that I understand. But today, I am posting something that speaks to everyone. Something PERSONAL. Honestly, I expect this entry to receive zero views. But if nothing else, I can come back to this blog one day when I'm having a bad day or something and run across this post. And I will be reminded... DON'T FORGET TO ENJOY LIFE. Say this to yourself out loud, right now. People, we can get caught up in some rather mundane details as we trek through life. It's so easy to lose track of what really matters that it should be no surprise to find yourself reading something like this and thinking to yourself 'Yeah. You are right, man. Some of this crap I'm clinging on to right now is so small in the grand scheme of things'. I have no reservation, no shame, in saying that I am more often than not caught up in the ever evolving world of 'shit that does not matter'. When you work in technology, you are surrounded by deadlines, upgrades, new versions, support 'end of life', etc. And by time you get done with your 8 hours you go home and put in a few more because you are STILL CAUGHT UP in the things you dealt with at work all day. DO YOURSELF A FAVOR. DO YOUR FAMILY AND FRIEND A FAVOR. When you are done for the day, and you drive home, get those work-related things out of your head before you pull into the driveway. If you are still thinking on them when you park the car, leave the engine running, close your eyes and take a deep breath. If you believe in God, pray. If you don't then meditate for a second with the INTENTION of letting go of the day and becoming the 'real you'. You may have forgotten who the real you is so I'll remind you.... THE REAL YOU IS THAT GUY OR GAL THAT LAUGHS, LOVES, AND LIVES. Be the real you as often as possible. If you can't do it during your 9 - 5, do it at home. YOUR RELATIONSHIPS AND YOUR PERSONAL HAPPINESS DEPEND ON IT. I am going to make you a promise right now. If you do what I've just said, your days will be longer and your joy will be exponential. I can't explain why I know this to be true. But I do know it. And if you are there reading this right now, you know it is true too. We both know it is true because it COMES FROM WITHIN EVERY MAN, WOMAN and CHILD. We are born into love and happiness. Lets not fade away into the darkness so easily found in this world. Lets keep the flame burning. The flame of passion. Passion for LIFE. Peace be with you.

    Read the article

  • How to retrieve Sharepoint data from a Windows Forms Application.

    - by Michael M. Bangoy
    In this demo I'm going to show how to retrieve SharePoint data and display it in a Windows Forms application.

    1. Open Visual Studio 2010 and create a new project.
    2. In the project template list, select Windows Forms Application.
    3. To communicate with SharePoint from a Windows Forms application we need to add the two SharePoint client DLLs located in c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI.
    4. Select Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll.

    That's it; we're ready to write our code. Note: in this example I've added the following controls to the form: a Button, a TextBox, a Label and a DataGridView.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Data.Objects;
    using System.Drawing;
    using System.Linq;
    using System.Text;
    using System.Security;
    using System.Windows.Forms;
    using SP = Microsoft.SharePoint.Client;

    namespace ClientObjectModel
    {
        public partial class Form1 : Form
        {
            // declare the url of the SharePoint site
            string _context = "theurlofyoursharepointsite";

            public Form1()
            {
                InitializeComponent();
            }

            private void Form1_Load(object sender, EventArgs e)
            {
            }

            private void getsitetitle()
            {
                SP.ClientContext context = new SP.ClientContext(_context);
                SP.Web _site = context.Web;
                context.Load(_site);
                context.ExecuteQuery();
                txttitle.Text = _site.Title;
                context.Dispose();
            }

            private void loadlist()
            {
                using (SP.ClientContext _clientcontext = new SP.ClientContext(_context))
                {
                    SP.Web _web = _clientcontext.Web;
                    SP.ListCollection _lists = _clientcontext.Web.Lists;
                    _clientcontext.Load(_lists);
                    _clientcontext.ExecuteQuery();

                    DataTable dt = new DataTable();
                    DataColumn column = new DataColumn();
                    column.DataType = Type.GetType("System.String");
                    column.ColumnName = "List Title";
                    dt.Columns.Add(column);

                    foreach (SP.List listitem in _lists)
                    {
                        DataRow row = dt.NewRow();
                        row["List Title"] = listitem.Title;
                        dt.Rows.Add(row);
                    }

                    dataGridView1.DataSource = dt;
                }
            }

            private void cmdload_Click(object sender, EventArgs e)
            {
                getsitetitle();
                loadlist();
            }
        }
    }
    ```

    That's it. Running the application and clicking the Load button will retrieve the title of the SharePoint site and display it in the TextBox, and it will also retrieve ALL of the SharePoint lists on that site and populate the DataGridView with their titles. Hope this helps. Thank you.
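
    As a small extension (not from the original post), the client object model also lets you limit what is sent over the wire. The sketch below requests only the Title of each list, which is all the DataGridView in this example actually uses; it assumes a plain `using Microsoft.SharePoint.Client;` directive alongside the SP alias so that the Include extension method resolves.

    ```csharp
    // Sketch only: load just the Title property of each list instead of the
    // full default property set.
    private void loadlisttitlesonly()
    {
        using (SP.ClientContext ctx = new SP.ClientContext(_context))
        {
            SP.ListCollection lists = ctx.Web.Lists;
            ctx.Load(lists, ls => ls.Include(l => l.Title)); // request only Title
            ctx.ExecuteQuery();

            foreach (SP.List list in lists)
            {
                System.Diagnostics.Debug.WriteLine(list.Title);
            }
        }
    }
    ```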

    Read the article

  • The Best Data Integration for Exadata Comes from Oracle

    - by maria costanzo
    Oracle Data Integrator and Oracle GoldenGate offer unique and optimized data integration solutions for Oracle Exadata. For example, customers that choose to feed their data warehouse or reporting database with near real-time throughout the day, can do so without decreasing  performance or availability of source and target systems. And if you ask why real-time, the short answer is: in today’s fast-paced, always-on world, business decisions need to use more relevant, timely data to be able to act fast and seize opportunities. A longer response to "why real-time" question can be found in a related blog post. If we look at the solution architecture, as shown on the diagram below,  Oracle Data Integrator and Oracle GoldenGate are both uniquely designed to take full advantage of the power of the database and to eliminate unnecessary middle-tier components. Oracle Data Integrator (ODI) is the best bulk data loading solution for Exadata. ODI is the only ETL platform that can leverage the full power of Exadata, integrate directly on the Exadata machine without any additional hardware, and by far provides the simplest setup and fastest overall performance on an Exadata system. We regularly see customers achieving a 5-10 times boost when they move their ETL to ODI on Exadata. For  some companies the performance gain is even much higher. For example a large insurance company did a proof of concept comparing ODI vs a traditional ETL tool (one of the market leaders) on Exadata. The same process that was taking 5hrs and 11 minutes to complete using the competing ETL product took 7 minutes and 20 seconds with ODI. Oracle Data Integrator was 42 times faster than the conventional ETL when running on Exadata.This shows that Oracle's own data integration offering helps you to gain the most out of your Exadata investment with a truly optimized solution. GoldenGate is the best solution for streaming data from heterogeneous sources into Exadata in real time. Oracle GoldenGate can also be used together with Data Integrator for hybrid use cases that also demand non-invasive capture, high-speed real time replication. Oracle GoldenGate enables real-time data feeds from heterogeneous sources non-invasively, and delivers to the staging area on the target Exadata system. ODI runs directly on Exadata to use the database engine power to perform in-database transformations. Enterprise Data Quality is integrated with Oracle Data integrator and enables ODI to load trusted data into the data warehouse tables. Only Oracle can offer all these technical benefits wrapped into a single intelligence data warehouse solution that runs on Exadata. Compared to traditional ETL with add-on CDC this solution offers: §  Non-invasive data capture from heterogeneous sources and avoids any performance impact on source §  No mid-tier; set based transformations use database power §  Mini-batches throughout the day –or- bulk processing nightly which means maximum availability for the DW §  Integrated solution with Enterprise Data Quality enables leveraging trusted data in the data warehouse In addition to Starwood Hotels and Resorts, Morrison Supermarkets, United Kingdom’s fourth-largest food retailer, has seen the power of this solution for their new BI platform and shared their story with us. Morrisons needed to analyze data across a large number of manufacturing, warehousing, retail, and financial applications with the goal to achieve single view into operations for improved customer service. 
The retailer deployed Oracle GoldenGate and Oracle Data Integrator to bring new data into Oracle Exadata in near real-time and replicate the data into reporting structures within the data warehouse—extending visibility into operations. Using Oracle's data integration offering for Exadata, Morrisons produced financial reports in seconds, rather than minutes, and improved staff productivity and agility. You can read more about Morrison’s success story here and hear from Starwood here. From an Irem Radzik article.

    Read the article

  • Using Solaris zfs + iscsi targets with Oracle VM

    - by wim.coekaerts
    I was playing with my Oracle VM setup and needed some shared storage that was block based. I did not have a storage array available but I did have a solaris box, that I use for Oracle VDI, available. I set up a few iscsi targets on this solaris server and exported them to my 2 Oracle VM servers. Here's how I did this : (1) On the solaris side : # zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT rpool 544G 129G 415G 23% ONLINE - I just have a simple zpool, called rpool, on this box. It has plenty of space available for my needs. So I will use rpool and I will create 5 50gb vols : zfs create -V 50G rpool/ovm1 zfs create -V 50G rpool/ovm2 zfs create -V 50G rpool/ovm3 zfs create -V 50G rpool/ovm4 zfs create -V 50G rpool/ovm5 I want to use these volumes for iscsi so I have to enable them as shared iscsi devices : zfs set shareiscsi=on rpool/ovm1 zfs set shareiscsi=on rpool/ovm2 zfs set shareiscsi=on rpool/ovm3 zfs set shareiscsi=on rpool/ovm4 zfs set shareiscsi=on rpool/ovm5 The command iscsitadm list target should list these devices so make sure they show up. # iscsitadm list target Target: rpool/ovm1 iSCSI Name: iqn.1986-03.com.sun:02:896c766c-0943-4da5-d47e-9575b5a0be36 Connections: 2 Target: rpool/ovm2 iSCSI Name: iqn.1986-03.com.sun:02:a3116b46-73e0-e8c2-e80c-9a4f71aff069 Connections: 2 Target: rpool/ovm3 iSCSI Name: iqn.1986-03.com.sun:02:a838c400-2730-c0d6-f2c2-ee186a0261c1 Connections: 2 Target: rpool/ovm4 iSCSI Name: iqn.1986-03.com.sun:02:2e046afb-d66d-4f3f-c5de-8115e0ddd931 Connections: 2 Target: rpool/ovm5 iSCSI Name: iqn.1986-03.com.sun:02:66109fbe-81ac-ef05-f85e-ab8c1f34cb43 Connections: 2 At this point I want to make sure that I have some access control on these devices. To make it easier, I will create an alias for my 2 servers and use the alias for the ACL. get the iqn from the 2 servers on my 2 ovm servers (wcoekaer-srv1, wcoekaer-srv2) get the content of /etc/iscsi/initiatorname.iscsi (for each server) InitiatorName=iqn.1986-03.com.sun:01:2a7526f0ffff On the solaris side create the aliases : iscsitadm create initiator -n iqn.1986-03.com.sun:01:2a7526f0ffff wcoekaer-srv1 iscsitadm create initiator -n iqn.1986-03.com.sun:01:e31b08110f1 wcoekaer-srv5 Add the ACL to the targets : iscsitadm modify target -l wcoekaer-srv1 rpool/ovm1 iscsitadm modify target -l wcoekaer-srv1 rpool/ovm2 iscsitadm modify target -l wcoekaer-srv1 rpool/ovm3 iscsitadm modify target -l wcoekaer-srv1 rpool/ovm4 iscsitadm modify target -l wcoekaer-srv1 rpool/ovm5 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm1 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm2 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm3 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm4 iscsitadm modify target -l wcoekaer-srv5 rpool/ovm5 (2) the Oracle VM side On each server just do 2 simple things : # iscsiadm -m discovery -t sendtargets -p ca-vdi1 where ca-vdi1 is my solaris server name # service iscsi restart When I do cat /proc/partitions on my servers I will see the devices show up # cat /proc/partitions major minor #blocks name 8 0 160836480 sda 8 1 104391 sda1 8 2 3148740 sda2 8 3 1052257 sda3 253 0 6377804 dm-0 253 1 6377804 dm-1 253 2 6377804 dm-2 8 16 52428800 sdb 8 32 52428800 sdc 8 48 52428800 sdd 8 80 52428800 sdf 8 64 52428800 sde These 5 new devices sd[b..f] are shared storage for Oracle VM and can be used to pass through to the VM's as phy: devices or put ocfs2 on it and use as shared filesystem storage for dom0 repositories. 
    I am setting up an 11gR2 RAC template (the cool stuff Saar did), so I am using my devices to create a two-node RAC cluster with phy: devices.

    Read the article

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today’s hardware. In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background. Adam, however, challenged further any assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular of offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current Optimizer mean that SQL Server is yet to be able to exploit properly all those extra CPUs. Adam’s contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO because the disk system simply wouldn’t be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, often in cases where a parallel plan would improve performance dramatically. It was challenging and thought provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I’m interested to hear though, when and how often people feel the force of the optimizer’s shortcomings. Barring mistakes, such as stale statistics, how often do you feel the Optimizer fails to find the plan you think it should, and what are the most common causes? 
    Is it a fight to coax it toward parallelism? Combating unexpected plans arising from table partitioning? Something altogether more prosaic? Cheers, Tony.

    Read the article

  • PASS Summit 2012: keynote and Mobile BI announcements #sqlpass

    - by Marco Russo (SQLBI)
    Today at PASS Summit 2012 there have been several announcements during the keynote. Moreover, other news have not been highlighted in the keynote but are equally if not more important for the BI community. Let’s start from the big news in the keynote (other details on SQL Server Blog): Hekaton: this is the codename for in-memory OLTP technology that will appear (I suppose) in the next release of the SQL Server relational engine. The improvement in performance and scalability is impressive and it enables new scenarios. I’m curious to see whether it can be used also to improve ETL performance and how it differs from using SSD technology. Updates on Columnstore: In the next major release of SQL Server the columnstore indexes will be updatable and it will be possible to create a clustered index with Columnstore index. This is really a great news for near real-time reporting needs! Polybase: in 2013 it will debut SQL Server 2012 Parallel Data Warehouse (PDW), which will include the Polybase technology. By using Polybase a single T-SQL query will run queries across relational data and Hadoop data. A single query language for both. Sounds really interesting for using BigData in a more integrated way with existing relational databases. And, of course, to load a data warehouse using BigData, which is the ultimate goal that we all BI Pro have, right? SQL Server 2012 SP1: the Service Pack 1 for SQL Server 2012 is available now and it enable the use of PowerPivot for SharePoint and Power View on a SharePoint 2013 installation with Excel 2013. Power View works with Multidimensional cube: the long-awaited feature of being able to use PowerPivot with Multidimensional cubes has been shown by Amir Netz in an amazing demonstration during the keynote. The interesting thing is that the data model behind was based on a many-to-many relationship (something that is not fully supported by Power View with Tabular models). Another interesting aspect is that it is Analysis Services 2012 that supports DAX queries run on a Multidimensional model, enabling the use of any future tool generating DAX queries on top of a Multidimensional model. There are still no info about availability by now, but this is *not* included in SQL Server 2012 SP1. So what about Mobile BI? Well, even if not announced during the keynote, there is a dedicated session on this topic and there are very important news in this area: iOS, Android and Microsoft mobile platforms: the commitment is to get data exploration and visualization capabilities working within June 2013. This should impact at least Power View and SharePoint/Excel Services. This is the type of UI experience we are all waiting for, in order to satisfy the requests coming from users and customers. The important news here is that native applications will be available for both iOS and Windows 8 so it seems that Android will be supported initially only through the web. Unfortunately we haven’t seen any demo, so it’s not clear what will be the offline navigation experience (and whether there will be one). But at least we know that Microsoft is working on native applications in this area. I’m not too surprised that HTML5 is not the magic bullet for all the platforms. The next PASS Business Analytics conference in 2013 seems a good place to see this in action, even if I hope we don’t have to wait other six months before seeing some demo of native BI applications on mobile platforms! 
    Viewing Reporting Services reports on the iPad is supported starting with SQL Server 2012 SP1, which was released today. This is another good reason to install SP1 on SQL Server 2012. If you are at PASS Summit 2012, come and join me, Alberto Ferrari and Chris Webb at our book signing event tomorrow, Thursday, November 8, 2012, at the bookstore between 12:00pm and 12:30pm, or follow one of our sessions!

    Read the article

  • Velocity collision detection (2D)

    - by ultifinitus
    Alright, so I have made a simple game engine (see youtube) And my current implementation of collision resolution has a slight problem, involving the velocity of a platform. Basically I run through all of the objects necessary to detect collisions on and resolve those collisions as I find them. Part of that resolution is setting the player's velocity = the platform's velocity. Which works great! Unless I have a row of platforms moving at different velocities or a platform between a stack of tiles.... (current system) bool player::handle_collisions() { collisions tcol; bool did_handle = false; bool thisObjectHandle = false; for (int temp = 0; temp < collideQueue.size(); temp++) { thisObjectHandle = false; tcol = get_collision(prevPos.x,y,get_img()->get_width(),get_img()->get_height(), collideQueue[temp]->get_position().x,collideQueue[temp]->get_position().y, collideQueue[temp]->get_img()->get_width(),collideQueue[temp]->get_img()->get_height()); if (prevPos.y >= collideQueue[temp]->get_prev_pos().y + collideQueue[temp]->get_img()->get_height()) if (tcol.top > 0) { add_pos(0,tcol.top); set_vel(get_vel().x,collideQueue[temp]->get_vel().y); thisObjectHandle = did_handle = true; } if (prevPos.y + get_img()->get_height() <= collideQueue[temp]->get_prev_pos().y) if (tcol.bottom > 0) { add_pos(collideQueue[temp]->get_vel().x,-tcol.bottom); set_vel(get_vel().x/*collideQueue[temp]->get_vel().x*/,collideQueue[temp]->get_vel().y); ableToJump = true; jumpTimes = maxjumpable; thisObjectHandle = did_handle = true; } /// /// ADD CODE FROM NEXT CODE BLOCK HERE (on forum, not in code) /// } for (int temp = 0; temp < collideQueue.size(); temp++) { thisObjectHandle = false; tcol = get_collision(x,y,get_img()->get_width(),get_img()->get_height(), collideQueue[temp]->get_position().x,collideQueue[temp]->get_position().y, collideQueue[temp]->get_img()->get_width(),collideQueue[temp]->get_img()->get_height()); if (prevPos.x + get_img()->get_width() <= collideQueue[temp]->get_prev_pos().x) if (tcol.left > 0) { add_pos(-tcol.left,0); set_vel(collideQueue[temp]->get_vel().x,get_vel().y); thisObjectHandle = did_handle = true; } if (prevPos.x >= collideQueue[temp]->get_prev_pos().x + collideQueue[temp]->get_img()->get_width()) if (tcol.right > 0) { add_pos(tcol.right,0); set_vel(collideQueue[temp]->get_vel().x,get_vel().y); thisObjectHandle = did_handle = true; } } return did_handle; } (if I add the following code {where the comment to do so is}, which is glitchy, the above problem doesn't happen, though it brings others) if (!thisObjectHandle) { if (tcol.bottom > tcol.top) { add_pos(collideQueue[temp]->get_vel().x,-tcol.bottom); set_vel(get_vel().x,collideQueue[temp]->get_vel().y); } else if (tcol.top > tcol.bottom) { add_pos(0,tcol.top); set_vel(get_vel().x,collideQueue[temp]->get_vel().y); } } How would you change my system to prevent this?

    Read the article

  • When does a Tumbling Window Start in StreamInsight

    Whilst getting some courseware ready I was playing around writing some code and I decided to very simply show when a window starts and ends based on you asking for a TumblingWindow of n time units in StreamInsight.  I thought this was going to be a two second thing but what I found was something I haven’t yet found documented anywhere until now.   All this code is written in C# and will slot straight into my favourite quick-win dev tool LinqPad   Let’s first create a sample dataset   var EnumerableCollection = new [] { new {id = 1, StartTime = DateTime.Parse("2010-10-01 12:00:00 PM").ToLocalTime()}, new {id = 2, StartTime = DateTime.Parse("2010-10-01 12:20:00 PM").ToLocalTime()}, new {id = 3, StartTime = DateTime.Parse("2010-10-01 12:30:00 PM").ToLocalTime()}, new {id = 4, StartTime = DateTime.Parse("2010-10-01 12:40:00 PM").ToLocalTime()}, new {id = 5, StartTime = DateTime.Parse("2010-10-01 12:50:00 PM").ToLocalTime()}, new {id = 6, StartTime = DateTime.Parse("2010-10-01 01:00:00 PM").ToLocalTime()}, new {id = 7, StartTime = DateTime.Parse("2010-10-01 01:10:00 PM").ToLocalTime()}, new {id = 8, StartTime = DateTime.Parse("2010-10-01 02:00:00 PM").ToLocalTime()}, new {id = 9, StartTime = DateTime.Parse("2010-10-01 03:20:00 PM").ToLocalTime()}, new {id = 10, StartTime = DateTime.Parse("2010-10-01 03:30:00 PM").ToLocalTime()}, new {id = 11, StartTime = DateTime.Parse("2010-10-01 04:40:00 PM").ToLocalTime()}, new {id = 12, StartTime = DateTime.Parse("2010-10-01 04:50:00 PM").ToLocalTime()}, new {id = 13, StartTime = DateTime.Parse("2010-10-01 05:00:00 PM").ToLocalTime()}, new {id = 14, StartTime = DateTime.Parse("2010-10-01 05:10:00 PM").ToLocalTime()} };   Now let’s create a stream of point events   var inputStream = EnumerableCollection .ToPointStream(Application,evt=> PointEvent .CreateInsert(evt.StartTime,evt),AdvanceTimeSettings.StrictlyIncreasingStartTime);   Now we can create our windows over the stream.  The first window we will create is a one hour tumbling window.  We’'ll count the events in the window but what we do here is not the point, the point is our window edges.   var windowedStream = from win in inputStream.TumblingWindow(TimeSpan.FromHours(1),HoppingWindowOutputPolicy.ClipToWindowEnd) select new {CountOfEntries = win.Count()};   Now we can have a look at what we get.  I am only going to show the first non Cti event as that is enough to demonstrate what is going on   windowedStream.ToIntervalEnumerable().First(e=> e.EventKind == EventKind.Insert).Dump("First Row from Windowed Stream");   The results are below   EventKind Insert   StartTime 01/10/2010 12:00   EndTime 01/10/2010 13:00     { CountOfEntries = 5 }   Payload CountOfEntries 5   Now this makes sense and is quite often the width of window specified in examples.  So what happens if I change the windowing code now to var windowedStream = from win in inputStream.TumblingWindow(TimeSpan.FromHours(5),HoppingWindowOutputPolicy.ClipToWindowEnd) select new {CountOfEntries = win.Count()}; Now where does your window start?  What about   var windowedStream = from win in inputStream.TumblingWindow(TimeSpan.FromMinutes(13),HoppingWindowOutputPolicy.ClipToWindowEnd) select new {CountOfEntries = win.Count()};   Well for the first example your window will start at 01/10/2010 10:00:00 , and for the second example it will start at  01/10/2010 11:55:00 Surprised?   Here is the reason why and thanks to the StreamInsight team for listening.   Windows start at TimeSpan.MinValue. 
Windows are then created from that point onward, each of the size you specified in your code. If a window contains no events it is not produced by the engine to the output. This is why window start times can fall before the first event is created.
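
    A quick way to see where those boundaries come from, outside of StreamInsight, is to compute which window a given timestamp falls into when the windows are laid out from the minimum value of the timeline, as described above. This is plain C# arithmetic for illustration, not StreamInsight API code.

    ```csharp
    using System;

    class TumblingWindowBoundary
    {
        // Start of the tumbling window of length 'size' that contains 'eventTime',
        // assuming windows are laid out from DateTime.MinValue onward.
        static DateTime WindowStart(DateTime eventTime, TimeSpan size)
        {
            long ticksSinceOrigin = eventTime.Ticks - DateTime.MinValue.Ticks;
            long completeWindows = ticksSinceOrigin / size.Ticks;   // integer division floors
            return new DateTime(DateTime.MinValue.Ticks + completeWindows * size.Ticks);
        }

        static void Main()
        {
            DateTime first = DateTime.Parse("2010-10-01 12:00:00 PM");

            // one-hour windows line up on the hour, so this window starts at 12:00
            Console.WriteLine(WindowStart(first, TimeSpan.FromHours(1)));

            // 5-hour and 13-minute windows do not line up with the event time,
            // which is why the window starts shown in the post look surprising at first
            Console.WriteLine(WindowStart(first, TimeSpan.FromHours(5)));
            Console.WriteLine(WindowStart(first, TimeSpan.FromMinutes(13)));
        }
    }
    ```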

    Read the article

  • Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

    - by darrenm
    Solaris 11 brought both ZFS encryption and the Immutable Zones feature and I've talked about the combination in the past.  Solaris 11.1 adds a fully supported method of storing zones in their own ZFS using shared storage so lets update things a little and put all three parts together. When using an iSCSI (or other supported shared storage target) for a Zone we can either let the Zones framework setup the ZFS pool or we can do it manually before hand and tell the Zones framework to use the one we made earlier.  To enable encryption we have to take the second path so that we can setup the pool with encryption before we start to install the zones on it. We start by configuring the zone and specifying an rootzpool resource: # zonecfg -z eizoss Use 'create' to begin configuring a new zone. zonecfg:eizoss> create create: Using system default template 'SYSdefault' zonecfg:eizoss> set zonepath=/zones/eizoss zonecfg:eizoss> set file-mac-profile=fixed-configuration zonecfg:eizoss> add rootzpool zonecfg:eizoss:rootzpool> add storage \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 zonecfg:eizoss:rootzpool> end zonecfg:eizoss> verify zonecfg:eizoss> commit zonecfg:eizoss> Now lets create the pool and specify encryption: # suriadm map \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 PROPERTY VALUE mapped-dev /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # echo "zfscrypto" > /zones/p # zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \ /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # zpool export eizoss Note that the keysource example above is just for this example, realistically you should probably use an Oracle Key Manager or some other better keystorage, but that isn't the purpose of this example.  Note however that it does need to be one of file:// https:// pkcs11: and not prompt for the key location.  Also note that we exported the newly created pool.  The name we used here doesn't actually mater because it will get set properly on import anyway. So lets go ahead and do our install: zoneadm -z eizoss install -x force-zpool-import Configured zone storage resource(s) from: iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 Imported zone zpool: eizoss_rpool Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install Image: Preparing at /zones/eizoss/root. AI Manifest: /tmp/manifest.xml.ujaq54 SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: eizoss Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/solaris/release/ Please review the licenses for the following packages post-install: consolidation/osnet/osnet-incorporation (automatically accepted, not displayed) Package licenses may be viewed using the command: pkg info --license <pkg_fmri> DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 187/187 33575/33575 227.0/227.0 384k/s PHASE ITEMS Installing new actions 47449/47449 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 929.606 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install That was really all we had to do, when the install is done boot it up as normal. The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool zone is stored on.  Due to how inheritance works in ZFS he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone) or he can create encrypted datasets inside the zone that use keys of his own choosing, the output below shows the two cases: rpool is inheriting the key material from the global zone (note we can see the value of the keysource property but we don't use it inside the zone nor does that path need to be (or is) accessible inside the zone). Whereas rpool/export/home/bob has set keysource locally. # zfs get encryption,keysource rpool rpool/export/home/bob NAME PROPERTY VALUE SOURCE rpool encryption on inherited from $globalzone rpool keysource passphrase,file:///zones/p inherited from $globalzone rpool/export/home/bob encryption on local rpool/export/home/bob keysource passphrase,prompt local  

    Read the article

  • JOGL2 test compiles, but doesn't execute - help?

    - by Chuchinyi
    I have a problem with JOGL2. My JOGL2Template.java compiles fine, but executing it results in the following error: D:\java\java\jogl>javac JOGL2Template.java <== compile ok D:\java\java\jogl>java JOGL2Template <== execute error Exception in thread "main" java.lang.ExceptionInInitializerError at javax.media.opengl.GLProfile.<clinit>(GLProfile.java:1176) at JOGL2Template.<init>(JOGL2Template.java:24) at JOGL2Template.main(JOGL2Template.java:57) Caused by: java.lang.SecurityException: no certificate for gluegen-rt.dll in D:\ java\lib\gluegen-rt-natives-windows-i586.jar at com.jogamp.common.util.JarUtil.validateCertificate(JarUtil.java:350) at com.jogamp.common.util.JarUtil.validateCertificates(JarUtil.java:324) at com.jogamp.common.util.cache.TempJarCache.validateCertificates(TempJa rCache.java:328) at com.jogamp.common.util.cache.TempJarCache.bootstrapNativeLib(TempJarC ache.java:283) at com.jogamp.common.os.Platform$3.run(Platform.java:308) at java.security.AccessController.doPrivileged(Native Method) at com.jogamp.common.os.Platform.loadGlueGenRTImpl(Platform.java:298) at com.jogamp.common.os.Platform.<clinit>(Platform.java:207) ... 3 more Here is the JOGL2Template.java source code: import java.awt.Dimension; import java.awt.Frame; import java.awt.event.WindowAdapter; import java.awt.event.WindowEvent; import javax.media.opengl.GLAutoDrawable; import javax.media.opengl.GLCapabilities; import javax.media.opengl.GLEventListener; import javax.media.opengl.GLProfile; import javax.media.opengl.awt.GLCanvas; import com.jogamp.opengl.util.FPSAnimator; import javax.swing.JFrame; /* * JOGL 2.0 Program Template For AWT applications */ public class JOGL2Template extends JFrame implements GLEventListener { private static final int CANVAS_WIDTH = 640; // Width of the drawable private static final int CANVAS_HEIGHT = 480; // Height of the drawable private static final int FPS = 60; // Animator's target frames per second // Constructor to create profile, caps, drawable, animator, and initialize Frame public JOGL2Template() { // Get the default OpenGL profile that best reflect your running platform. GLProfile glp = GLProfile.getDefault(); // Specifies a set of OpenGL capabilities, based on your profile. GLCapabilities caps = new GLCapabilities(glp); // Allocate a GLDrawable, based on your OpenGL capabilities. GLCanvas canvas = new GLCanvas(caps); canvas.setPreferredSize(new Dimension(CANVAS_WIDTH, CANVAS_HEIGHT)); canvas.addGLEventListener(this); // Create a animator that drives canvas' display() at 60 fps. final FPSAnimator animator = new FPSAnimator(canvas, FPS); addWindowListener(new WindowAdapter() { // For the close button @Override public void windowClosing(WindowEvent e) { // Use a dedicate thread to run the stop() to ensure that the // animator stops before program exits. new Thread() { @Override public void run() { animator.stop(); System.exit(0); } }.start(); } }); add(canvas); pack(); setTitle("OpenGL 2 Test"); setVisible(true); animator.start(); // Start the animator } public static void main(String[] args) { new JOGL2Template(); } @Override public void init(GLAutoDrawable drawable) { // Your OpenGL codes to perform one-time initialization tasks // such as setting up of lights and display lists. } @Override public void display(GLAutoDrawable drawable) { // Your OpenGL graphic rendering codes for each refresh. } @Override public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) { // Your OpenGL codes to set up the view port, projection mode and view volume. 
} @Override public void dispose(GLAutoDrawable drawable) { // Hardly used. } } Any ideas what might be the cause of these errors?

    Read the article

  • Converting .docx to pdf (or .doc to pdf, or .doc to odt, etc.) with libreoffice on a webserver on the fly using php

    - by robertphyatt
    Ok, so I needed to convert .docx files to .pdf files on the fly, but none of the free php libraries that were available let me do it on my server (a webservice was not good enough). Basically either I needed to pay for a library (and have it maybe suck) or just deal with the free ones that didn't convert the formatting well enough. Not good enough! I found that LibreOffice (OpenOffice's successor) allows command line conversion using the LibreOffice conversion engine (which DID preserve the formatting like I wanted and generally worked great). I loaded the latest version of Ubuntu (http://www.ubuntu.com/download/ubuntu/download) onto my Virtual Box (https://www.virtualbox.org/wiki/Downloads) on my computer and found that I was able to easily convert files using the commandline like this: libreoffice --headless -convert-to pdf fileToConvert.docx -outdir output/path/for/pdf I thought: sweet...but I don't have admin rights on my host's web server. I tried to use a "portable" version of LibreOffice that I obtained from http://portablelinuxapps.org/ but I was unable to get it to work on my host's webserver, because my host's webserver didn't have all the dependencies (Dependency Hell! http://en.wikipedia.org/wiki/Dependency_hell) I was at a loss of how to make it work, until I ran across a cool project made by a Ph.D. student (Philip J. Guo) at Stanford called CDE: http://www.stanford.edu/~pgbovine/cde.html I will let you look at his explanations of how it works (I followed what he did in http://www.youtube.com/watch?feature=player_embedded&v=6XdwHo1BWwY, starting at about 32:00 as well as the directions on his site), but in short, it allows one to avoid dependency hell by copying all the files used when you run certain commands, recreating the linux environment where the command worked. I was able to use this to run LibreOffice without having to resort to someone's portable version of it, and it worked just like it did when I did it on Ubuntu with the command above, with a tweak: I needed to run the wrapper of LibreOffice the CDE generated. So, below is my PHP code that calls it. In this code snippet, the filename to be copied is passed in as $_POST["filename"]. I copy the file to the same spot where I originally converted the file, convert it, copy it back and then delete all the files (so that it doesn't start growing exponentially). I did it this way because I wasn't able to make it work otherwise on the webserver. If there is a linux + webserver ninja out there that can figure out how to make it work without doing this, I would be interested to know what you did. Please post a comment or something if you did that. <?php //first copy the file to the magic place where we can convert it to a pdf on the fly copy($time.$_POST["filename"], "../LibreOffice/cde-package/cde-root/home/robert/Desktop/".$_POST["filename"]); //change to that directory chdir('../LibreOffice/cde-package/cde-root/home/robert'); //the magic command that does the conversion $myCommand = "./libreoffice.cde --headless -convert-to pdf Desktop/".$_POST["filename"]." -outdir Desktop/"; exec ($myCommand); //copy the file back copy("Desktop/".str_replace(".docx", ".pdf", $_POST["filename"]), "../../../../../documents/".str_replace(".docx", ".pdf", $_POST["filename"])); //delete all the files out of the magic place where we can convert it to a pdf on the fly $files1 = scandir('Desktop'); //my files that I generated all happened to start with a number. 
$pattern = '/^[0-9]/'; foreach ($files1 as $value) { preg_match($pattern, $value, $matches); if(count($matches) > 0) { unlink("Desktop/".$value); } } //changing the header to the location of the file makes it work well on Android devices header( 'Location: '.str_replace(".docx", ".pdf", $_POST["filename"]) ); ?> And here is the tar.gz file I generated with CDE. To duplicate what I did exactly, put the tar.gz file in a folder somewhere. I will call that folder the "root". Make a new folder called "documents" in the "root" folder. Unpack the tar.gz and run the php script above from the "documents" folder. Success! I made a truly portable version of LibreOffice that can convert files on the fly on a webserver using 100% free, open source software!

    Read the article

  • Rotating text using CSS

    - by Renso
    Goal: rotate text using CSS only.

    How: Surprisingly, IE supports this feature rather well. You could use property filters in IE, but since those are only supported in IE browsers, I would not recommend it. CSS3, still in draft state, has a "writing-mode" property for doing this. It has been part of IE's browser engine since IE5.5, and now that it is part of the CSS3 draft specification it is the best way to implement this going forward. Firefox 3.5+, Opera 11, the WebKit-based browsers and IE9 implement this feature differently, by using the transform property. Without using third-party JavaScript or proprietary CSS properties, we can use the CSS3 "writing-mode" property, supported from IE5.5 up to IE8, the latter adding additional formatting options through -ms extensions.

    ```html
    <style type="text/css">
        .rightToLeft {
            writing-mode: tb-rl;
        }
    </style>

    <p class="rightToLeft">This is my text</p>
    ```

    This will rotate the text 90 degrees, starting from the right to the left. Here are all the options:

    - lr-tb – Default value; left to right, top to bottom
    - rl-tb – Right to left, top to bottom
    - tb-rl – Vertically; top to bottom, right to left
    - bt-rl – Vertically; bottom to top, right to left
    - tb-lr – Available in IE8+ via -ms-writing-mode; top to bottom, left to right
    - bt-lr – Bottom to top, left to right
    - lr-bt – Left to right, bottom to top

    What about Firefox, Safari, etc.? The following technique is needed for the other browsers: Firefox, Opera 11, Google Chrome and IE9. These browsers require their proprietary vendor prefixes: -moz-, -webkit-, -o- and -ms-.

    ```css
    -webkit-transform: rotate(90deg);
    -moz-transform: rotate(90deg);
    -ms-transform: rotate(90deg);
    -o-transform: rotate(90deg);
    transform: rotate(90deg);
    ```

    Read the article

  • Non use of persisted data

    - by Dave Ballantyne
Working at a client site, that in itself is good to say, I ran into a set of circumstances that made me ponder, and appreciate, the optimizer engine a bit more. Working on optimizing a stored procedure, I found a piece of code similar to: select BillToAddressID, Rowguid, dbo.udfCleanGuid(rowguid) from sales.salesorderheader where BillToAddressID = 985 A lovely scalar UDF was being used; in actuality it was used as part of the WHERE clause, but it is simplified here. Normally I would use an inline table valued function here, but in this case it wasn't a good option. So this seemed like a pretty good case to use a persisted column to improve performance. The supporting index was already defined as create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid) and the function code is Create Function udfCleanGuid(@GUID uniqueidentifier) returns varchar(255) with schemabinding as begin Declare @RetStr varchar(255) Select @RetStr=CAST(@Guid as varchar(255)) Select @RetStr=REPLACE(@Retstr,'-','') return @RetStr end Executing the Select statement produced a plan with nothing surprising: a seek to find the data and a compute scalar to execute the UDF. Let's get optimizing and remove the UDF with a persisted column: Alter table sales.salesorderheader add CleanedGuid as dbo.udfCleanGuid(rowguid) PERSISTED A subtle change to the SELECT statement… select BillToAddressID,CleanedGuid from sales.salesorderheader where BillToAddressID = 985 and our new optimized plan looks… not a lot different from before! We are using persisted data on our table, so where is the lookup to fetch it? It didn't happen, it was recalculated. Looking at the properties of the relevant Compute Scalar would confirm this, but a more graphic example is shown in the profiler SP:StatementCompleted event. Why didn't the lookup happen? Remember the index definition: it has included the original guid to avoid the lookup. The optimizer knows this column will be passed into the UDF, ran through its logic and decided that to recalculate is cheaper than the lookup. That may or may not be the case in actuality; the optimizer has no idea of the real cost of a scalar UDF. IMO the default cost of a scalar UDF should be seen as a lot higher than it is, since the real cost invariably is higher. Knowing this, how do we avoid the function call? Dropping the guid from the index is not an option, as there may be other code reliant on it. We are left with only one real option: add the persisted column into the index. drop index Sales.SalesOrderHeader.idxBill go create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid,cleanedguid) Now if we repeat the statement select BillToAddressID,CleanedGuid from sales.salesorderheader where BillToAddressID = 985 We still have a compute scalar operator, but this time it wasn't used to recalculate the persisted data. This can be confirmed with profiler again. The takeaway here is: just because you have persisted data, don't automatically assume that it is being used.
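For comparison, below is a sketch of the kind of inline table valued function alluded to above (the names are illustrative, and it was not an option in this particular case). Because the optimizer can expand an inline TVF into the main plan, it avoids the per-row cost of the scalar UDF:

Create Function dbo.itvfCleanGuid(@GUID uniqueidentifier)
returns table
with schemabinding
as
return
(
    Select REPLACE(CAST(@GUID as varchar(255)),'-','') as CleanedGuid
)
go
-- used with CROSS APPLY rather than a scalar call
select soh.BillToAddressID, cg.CleanedGuid
from sales.salesorderheader soh
cross apply dbo.itvfCleanGuid(soh.rowguid) cg
where soh.BillToAddressID = 985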

    Read the article

  • Social Search: Looking for Love

    - by Mike Stiles
For marketers and enterprise executives who have placed a higher priority on and allocated bigger budgets to search over social, it might be time to notice yet another shift that’s well underway. Social is search. Search marketing was always more of an internal slam-dunk than other digital initiatives. Even a C-suite that understood little about the new technology world knew it’s a good thing when people are able to find you. Google was the new Yellow Pages. Only with Google, you could get your listing first without naming yourself “AAAA Plumbing.” There were wizards out there who could give your business prominence in front of people who were specifically looking for what you offered. Other search giants like Bing also came along to offer such ideal matchmaking possibilities. But what if the consumer isn’t using a search engine to find what they’re looking for? And what if the search engines started altering their algorithms so that search placement manipulation was more difficult? Both of those things have started to happen. Experian Hitwise’s numbers show that visits to the major search engines in the UK dropped 100 million through August. Search engines are far from dead, or even challenged. But more and more, the public is discovering the sites and brands they need through advice they get via social, not search. You’ll find the worlds of social and search increasingly co-mingling as well. Search behemoths Google and Bing are incorporating Facebook and Google+ into their engines. Meanwhile, Facebook and Twitter have done some integration of global web search into their platforms. So what makes social such a worthwhile search entity for brands? First and foremost, the consumer has demonstrated a behavior of acting on recommendations from social connections. A cry in the wilderness like, “Anybody know any good catering companies?” will usually yield a link (and an endorsement) from a friend such as “Yeah, check out Just-Cheese-Balls Catering.” There’s no such human-driven force/influence behind the big search engines. Facebook’s Mark Zuckerberg and others call it “Friend Mining.” It is, in essence, searching for answers from friends’ experiences as opposed to faceless code. And Facebook has all of those friends’ experiences already stored as data. eMarketer says search is an $18 billion business, and investors are really into it. So no shock Facebook’s ready to leverage their social graph into relevant search. What do you do about all this as a brand? For one thing, it’s going to lead to some interesting paid marketing opportunities around the corner, including Sponsored Stories bought against certain queries, inserting deals into search results, capitalizing on social search results on mobile, etc. Apart from that, it might be time to stop mentally separating social and search in your strategic planning and budgeting. Courting your fans on social will cumulatively add up to more valuable, personally endorsed recommendations for your company when a consumer conducts a search on social. Fail to foster those relationships, fail to engage, fail to provide knock-em-dead customer service, fail to wow them with your actual products and services…and you’ll wind up with the visibility you deserve in social search results.

    Read the article

  • Deploying Oracle ADF Essentials Applications to Glassfish

    - by Shay Shmeltzer
With the new Oracle ADF Essentials offering you can now deploy applications that leverage Oracle ADF on the open source Glassfish 3.1 server. Deployment is documented in the official JDeveloper and ADF documentation (here), but below is a summary of the steps and a video of the steps you'll need to take to get a basic Oracle ADF Essentials application to work on GlassFish. Note - to make starting/stopping GlassFish easier for my demo I used my GlassFish extension that you can get here. First we'll install some ADF Runtime libraries on GlassFish: Download and install Glassfish (Note - if you also have an Oracle DB on the same machine, you'll want to switch GlassFish's HTTP port to something else instead of 8080). Download the Oracle ADF Essentials packaging - this will get you an adf_essentials.zip file. Copy the adf_essentials.zip to the lib directory of your Glassfish domain - on a default Windows install this would be: C:\glassfish3\glassfish\domains\domain1\lib Go to the above lib directory and issue an unzip -j adf_essentials.zip This will extract the ADF libraries to the directory. Now you can start the Glassfish server. Now let's configure Glassfish to handle applications of the ADF type: Invoke the admin console of Glassfish (http://localhost:4848) and log into your admin account. Go to Configurations->Server-config->JVM Settings and choose the JVM Options tab. Add the following entries: -XX:MaxPermSize=512m (note this entry should already exist so just make sure it has a big enough value) -Doracle.mds.cache=simple While we are in the admin console, we can also define JDBC connections that will be used by our application. Go into Resources->JDBC->JDBC Connection Pools and click to create a new one. Give it a name, choose the resource type to be javax.sql.XADataSource and choose Oracle as the Database Driver vendor. Click Next. Scroll down to the Additional Properties section and start filling in the information for your database. The values for an Oracle XE will be (user=hr, databaseName=XE, Password=hr, ServerName=localhost, DriverType=thin, PortNumber=1521). Click Finish. Click Ping to check your connection works. Now define a new JDBC Resource that will use the pool you just defined. In my example I called the resource jdbc/HRDS. You will need this name to match the name in your Application Module connection configuration. Now you can re-start the Glassfish server for the changes to take effect. Get an ADF application going (you can use the regular Fusion Application template for this). Go into the project properties of your viewController project; under the deployment section click to edit the deployment profile that is defined there. Go to Platform and choose Glassfish 3.1 from the drop down list. Click OK to go back to your project. Go to Application -> Application Properties -> Deployment. Go to Platform and choose Glassfish 3.1 from the drop down list. Click OK to go back to your project. This step will make sure that JDeveloper will automatically add the necessary ADF libraries to the EAR file that is being generated for deployment on Glassfish. Go to your Application->Deploy and deploy either to an EAR file or directly to a Glassfish server connection that you created. Things should just work, but if they don't then look at the server.log in the log directory and check out what error is in there. 
Here is a video demo of the various steps: Note - right now the deployment of an ADF application takes about 2 minutes on my machine; we are hoping to be able to improve this timing in the future. People who are more familiar with Glassfish might want to explore using exploded directory deployment and see if they can get it to work.
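If you would rather script this configuration than click through the admin console, GlassFish's asadmin tool can add the same JVM option; the lines below are only a hedged sketch against a default local domain (the MaxPermSize entry usually already exists, so it is easier to just adjust its value in the console as described above):

asadmin create-jvm-options "-Doracle.mds.cache=simple"
asadmin restart-domain domain1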

    Read the article

  • MPI Cluster Debugger launch integration in VS2010

Let's assume that you have all the HPC bits installed and that you have existing MPI code (or you created a "Hello World" project using the MPI project template). Of course, you create a single MPI application and at runtime it will correspond to multiple processes (of the same app) launched on multiple nodes (i.e. machines) on the cluster. So how do you debug such a situation by simply hitting the familiar "F5" keystroke (i.e. Debug - Start Debugging)? WATCH IT INSTEAD OF READING ABOUT IT: If you can't bear to read through all the details below, just watch this 19-minute screencast explaining this VS2010 feature. Alternatively, or even additionally, keep on reading. REQUIREMENT: When you debug an MPI application, you would want the copying of resources from your client machine (where Visual Studio is installed) to each compute node (where Windows HPC Server is installed) to take place automatically for you. 'Resources' in the previous sentence includes your application binary, plus any binary or data dependencies it may have, plus PDBs if needed, plus the debug CRT of the correct bitness, plus msvsmon for remote debugging to work. You would also want, after copying is complete, to have your app and msvsmon launched and attached so that you can hit breakpoints back in Visual Studio on your client machine. All these things that you would want are delivered in VS2010. STEPS TO F5: 1. In your MPI project where you have placed a breakpoint, go to Project Properties - Configuration Properties - Debugging. Ensure the "Debugger to launch" combo box value is set to MPI Cluster Debugger. 2. There are a whole bunch of properties here and typically you can ignore all of them except one: Run Environment. By default it is set to run 1 process on your local machine, and if you change the number after that to, for example, 4 it will launch 4 processes of your app on your local machine. You want this to run on your cluster though, so go to the dropdown arrow at the end of the Run Environment cell and open it to expose the "Edit Hpc node" menu, which opens the Node Selector dialog. In this dialog you can enter (or pick from a list) the cluster head node name and then the number of processes you want to execute on the cluster, then hit OK and… you are done. 3. Press F5 and watch your breakpoint get hit (after giving it some time for copying, remote execution, attachment and symbol resolution to take place). GOING DEEPER: In the MPI Cluster Debugger project properties above, you can see many additional properties besides Run Environment. They are all optional, but you may want to understand them in order to fine tune your cluster debugging. Read all about each one of these on the MSDN page Configuration Properties for the MPI Cluster Debugger. In the Node Selector dialog above you can see more options than just the Head Node name and Number of Processes to run. They should be self-explanatory, but I also cover them in depth in my screencast, showing you an example of why you would choose to schedule processes per core versus per node. You can also read about these options on MSDN as part of the page How to: Configure and Launch the MPI Cluster Debugger. To read through an example that touches on MPI project creation, project properties, node selector, and also usage of MPI with OpenMP plus MPI with PPL, read the MSDN page Walkthrough: Launching the MPI Cluster Debugger in Visual Studio 2010. Happy MPI debugging! Comments about this post welcome at the original blog.
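For context, here is a minimal sketch of the kind of MPI "Hello World" the project template gives you (illustrative code, not the template's exact output); putting the breakpoint on the printf line is a convenient way to confirm the cluster debugger has attached to every rank:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    int rank = 0, size = 0;
    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes launched */
    printf("Hello from rank %d of %d\n", rank, size);  /* set the breakpoint here */
    MPI_Finalize();
    return 0;
}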

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but using xVelocity and VertiPaq when we talk about Analysis Services has the same meaning. In this post I’ll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to be optimized. A first approach is to look in the DataDir of Analysis Services for the folder containing the database. Then, look for the biggest files in all subfolders and you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very efficient. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in and execute it) you will see all database objects sorted by used size in descending order. SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC You can look at the first rows in order to understand which are the most expensive columns in your tabular model. The interesting data provided are: TABLE_ID: the name of the object – it can also be a dictionary or an index. COLUMN_ID: the column name the object belongs to – you can also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes. RECORDS_COUNT: the number of rows in the column. USED_SIZE: the memory used for the object. By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are: Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model. Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (i.e. the temperature), round the number to the nearest decimal that is relevant to you. Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article. Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower density columns (those with a small number of distinct values) and going to the higher density columns (those with high cardinality) is the technique that provides the best compression ratio. After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.
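As an illustration of the "change granularity" and "split the column" advice, the reshaping can be done in the view or query that feeds the tabular model; the table and column names below are purely hypothetical:

-- hedged T-SQL sketch: expose two lower-cardinality columns instead of one
-- high-cardinality datetime, and round away precision nobody will query on
SELECT
    CAST(OrderTimestamp AS date)    AS OrderDate,    -- low cardinality
    CAST(OrderTimestamp AS time(0)) AS OrderTime,    -- whole seconds only
    ROUND(Temperature, 1)           AS Temperature   -- one decimal is enough
FROM dbo.FactSales;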

    Read the article

  • ORE graphics using Remote Desktop Protocol

    - by Sherry LaMonica
Oracle R Enterprise graphics are returned as raster, or bitmap graphics. Raster images consist of tiny squares of color information referred to as pixels that form points of color to create a complete image. Plots that contain raster images render quickly in R and create small, high-quality exported image files in a wide variety of formats. However, it is a known issue that the rendering of raster images can be problematic when creating graphics using a Remote Desktop connection. Raster images do not display in the windows device using Remote Desktop under the default settings. This happens because Remote Desktop restricts the number of colors when connecting to a Windows machine to 16 bits per pixel, and interpolating raster graphics requires many colors, at least 32 bits per pixel. For example, this simple embedded R image plot will be returned in a raster-based format using a standalone Windows machine:  R> library(ORE) R> ore.connect(user="rquser", sid="orcl", host="localhost", password="rquser", all=TRUE)  R> ore.doEval(function() image(volcano, col=terrain.colors(30))) Here, we first load the ORE packages and connect to the database instance using database login credentials. The ore.doEval function executes the R code within the database embedded R engine and returns the image back to the client R session. Over a Remote Desktop connection under the default settings, this graph will appear blank due to the restricted number of colors. Users who encounter this issue have two options to display ORE graphics over Remote Desktop: either raise Remote Desktop's Color Depth or direct the plot output to an alternate device. Option #1: Raise Remote Desktop Color Depth setting. In a Remote Desktop session, all environment variables, including display variables determining Color Depth, are determined by the RDP-Tcp connection settings. For example, users can reduce the Color Depth when connecting over a slow connection. The different settings are 15 bits, 16 bits, 24 bits, or 32 bits per pixel. To raise the Remote Desktop color depth: On the Windows server, launch Remote Desktop Session Host Configuration from the Accessories menu. Under Connections, right click on RDP-Tcp and select Properties. On the Client Settings tab either uncheck Limit Maximum Color Depth or set it to 32 bits per pixel. Click Apply, then OK, log out of the remote session and reconnect. After reconnecting, the Color Depth on the Display tab will be set to 32 bits per pixel. Raster graphics will now display as expected. For ORE users, the increased color depth results in slightly reduced performance during plot creation, but the graph will be created instead of displaying an empty plot. Option #2: Direct plot output to alternate device. Plotting to a non-windows device is a good option if it's not possible to increase Remote Desktop Color Depth, or if performance is degraded when creating the graph. Several device drivers are available for off-screen graphics in R, such as postscript, pdf, and png. On-screen devices include windows, X11 and Cairo. Here we output to the Cairo device to render an on-screen raster graphic. The grid.raster function in the grid package is analogous to other grid graphical primitives - it draws a raster image within the current plot's grid.  
R> options(device = "CairoWin") # use Cairo device for plotting during the session R> library(Cairo) # load Cairo, grid and png libraries  R> library(grid) R> library(png)  R> res <- ore.doEval(function()image(volcano,col=terrain.colors(30))) # create embedded R plot  R> img <- ore.pull(res, graphics = TRUE)$img[[1]] # extract image  R> grid.raster(as.raster(readPNG(img)), interpolate = FALSE) # generate raster graph R> dev.off() # turn off first device   By default, the interpolate argument to grid.raster is TRUE, which means that what is actually drawn by R is a linear interpolation of the pixels in the original image. Setting interpolate to FALSE uses a sample from the pixels in the original image.A list of graphics devices available in R can be found in the Devices help file from the grDevices package: R> help(Devices)

    Read the article

  • My .NET Technology picks for 2011

    - by shiju
My technology predictions for 2011: Cloud computing and mobile application development will be the hottest trends for 2011. I expect Windows Azure to be very hot in 2011, with a lot of cloud computing adoption happening on Windows Azure during the year. Web application scalability will be the big challenge for architects in the next year, and architecture approaches like CQRS will get some attention. Architects will look at different options for web application scalability, and adoption of NoSQL and document databases will grow in 2011. The following are my technology picks for the .NET stack. Windows Azure Windows Azure will be one of the hottest technologies of 2011. Adoption of the cloud and Windows Azure will get big attention next year. The Windows Azure platform is a flexible cloud-computing platform that lets you focus on solving business problems and addressing customer needs. There is no need to invest upfront in expensive infrastructure: pay only for what you use, scale up when you need capacity and pull it back when you don’t, while Microsoft handles all the patches and maintenance in a secure environment with over 99.9% uptime. Silverlight 5 Silverlight is becoming a common technology for a variety of development platforms. You can develop Silverlight applications for the web, the desktop and Windows Phone. The new Silverlight 5 beta will be available during the first quarter of next year with new capabilities and a lot of new features. Silverlight 5 will be a powerful development platform for both web-based business apps and rich media solutions. We can expect the final version of Silverlight 5 by the end of 2011. Windows Phone 7 Development Tools Mobile application development will be very hot in 2011, and Windows Phone 7 will be one of the hottest technologies of next year. You can get an introduction to the Windows Phone 7 Development Tools from somasegar’s blog post and the MSDN documentation available from here. EF Code First I am a big fan of Entity Framework’s Code First approach and hope that it will attract more people to Entity Framework 4. EF Code First lets you focus on the domain model, which enables Domain-Driven Development for applications. I hope that DDD fans will love the EF Code First approach. Entity Framework 4 now supports three approaches, and these will attract different types of developer audiences. ASP.NET MVC 3 ASP.NET MVC 3 will be the hottest technology of the Microsoft web stack next year. ASP.NET developers will widely move to the ASP.NET MVC Framework from their WebForms development. The new Razor view engine is great and it will increase the adoption of ASP.NET MVC 3. Razor will improve productivity when working with ASP.NET MVC 3 views. You can build great web applications using ASP.NET MVC 3 and jQuery with better maintainability, generation of clean HTML and even better performance. In my opinion, the best technology stack for web development is ASP.NET MVC 3 with Entity Framework 4 Code First as the ORM. Next year, you can expect more articles from my blog on ASP.NET MVC 3 and Entity Framework 4 Code First. RavenDB NoSQL and document databases will get more attention in the coming year, and RavenDB will be the most notable document database in the .NET stack. RavenDB is an open source (with a commercial option) document database for the .NET/Windows platform developed by Ayende Rahien. It is a .NET-focused document database that comes with a fully functional .NET client API and supports LINQ. 
I have written a few articles on RavenDB, and you can read them here. Managed Extensibility Framework (MEF) Many people have not yet realized the power of MEF. MEF lets you create extensible applications and provides a great solution to the runtime extensibility problem. I hope that more .NET developers will adopt MEF next year for their .NET applications. You can get an excellent introduction to MEF from Anoop Madhusudanan’s blog post MEF or Managed Extensibility Framework – Creating a Zoo and Animals

    Read the article

  • Microsoft Certifications &ndash; how to prep? and why?

    - by Kelly Jones
I often get asked by my colleagues, “how do you prepare for Microsoft exams?” Well, the answer for me is a little complicated, so I thought I’d write up here what I do. The first thing I do is go to Microsoft’s website to find the exam that I need to take. If you’re looking to get a particular certification, then their site lists the exam or exams that you’ll need to pass. If you’ve already taken an exam, you can log onto the MCP website and use their certification planner. This little tool tells you what tests you need, based on the exams you’ve already passed. It is very helpful with the certifications that require multiple tests, and especially ones that have electives. Once you’ve identified the test, you can use Microsoft’s website to see the topics that it covers. This is a good outline to follow when you study. I’ll keep this handy to reference back throughout my studying to make sure that I’m covering all the topics I need to know. The next step is probably where I am a little different from others. IF the exam outline covers material that I’ve already been working with, then I’ll skip a lot of the studying and go directly to the practice tests. However, if I’m looking at the outline and wondering how in the world do you do that? – then it’s time to hit the books. So, where to find study materials? Try typing the exam number into any search engine. You’ll typically find a ton of resources. If you’re lucky, you’ll find books that others recommend based on their studying and exam experience. As a Sogeti employee, I have access to three really good resources: an internal company list of all of the consultants who have passed particular tests (on our Connex website), Books 24x7, and Transcender practice exams. Once my studying is done (either through books or experience), I’ll go through the practice exams. I find them really helpful in getting my knowledge lined up to the thinking process that the exam writers use. If I’m relying on my experience, then this really helps me to identify gaps in my knowledge that I’ll need to fill. That’s about it. If I’m doing ok on the practice exams, then I’ll take the real thing. I’ve found that the practice exams are usually more difficult than the real thing. Oh – one other thing I do related to Microsoft exams – I try to take any beta exams that Microsoft makes available that fall into my skill set. Microsoft has started a blog to announce these and the seats usually fill up really quickly. The blog is at http://blogs.technet.com/betaexams/ . You don’t get your results instantly like you do with a normal exam; instead, you have to wait for everyone to finish taking the beta exams and for Microsoft to determine which questions they are using and which they are dropping. So, be prepared to wait six to eight weeks for your results.

    Read the article

  • Please Help - PHP Form, when no text is entered [migrated]

    - by Joe Turner
I'm creating a mobile landing page and I have also created a form that allows me to create more, by duplicating a folder that hosts a template file. The script then takes you to a page where you input the company details one by one and press submit. Then the page is created. My problem is, when a field is left out (YouTube for instance), the button is created and is blank. I would like there to be a default text for when there is no text. I've tried a few things and have been struggling to make this work for DAYS! <?php $company = $_POST["company"]; $phone = $_POST["phone"]; $colour = $_POST["colour"]; $email = $_POST["email"]; $website = $_POST["website"]; $video = $_POST["video"]; ?> <div id="contact-area"> <form method="post" action="generate.php"><br> <input type="text" name="company" placeholder="Company Name" /><br> <input type="text" name="slogan" placeholder="Slogan" /><br> <input class="color {required:false}" name="colour" placeholder="Company Colour"><br> <input type="text" name="phone" placeholder="Phone Number" /><br> <input type="text" name="email" placeholder="Email Address" /><br> <input type="text" name="website" placeholder="Full Website - Include http://" /><br> <input type="text" name="video" placeholder="Video URL" /><br> <input type="submit" value="Generate QuickLinks" style="background:url(images/submit.png) repeat-x; color:#FFF"/> </form> That's the form. It takes the variables and posts them to the file below. <?php $File = "includes/details.php"; $Handle = fopen($File, 'w'); $Data = "<div id='logo'> <h1 style='color:#$_POST[colour]'>$_POST[company]</h1> <h2>$_POST[slogan]</h2> </div> <ul data-role='listview' data-inset='true' data-theme='b'> <li style='background-color:#$_POST[colour]'><a href='tel:$_POST[phone]'>Phone Us</a></li> <li style='background-color:#$_POST[colour]'><a href='mailto:$_POST[email]'>Email Us</a></li> <li style='background-color:#$_POST[colour]'><a href='$_POST[website]'>View Full Website</a></li> <li style='background-color:#$_POST[colour]'><a href='$_POST[video]'>Watch Us</a></li> </ul> \n"; fwrite($Handle, $Data); fclose($Handle); ?> and that is what the form turns into. I need there to be a default link put in in case the field is left blank, which it is sometimes. Thanks in advance guys.
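One common pattern for this kind of fallback, sketched here purely as an illustration (the helper name and the default URL are placeholders, not part of the original question), is to normalize each posted field before building $Data:

<?php
// hedged sketch: fall back to a default when a posted field is empty
function post_or_default($key, $default) {
    return (isset($_POST[$key]) && trim($_POST[$key]) !== '') ? $_POST[$key] : $default;
}
$video   = post_or_default('video', 'http://example.com');   // placeholder default
$website = post_or_default('website', 'http://example.com'); // placeholder default
?>

The values returned this way can then be used in place of the raw $_POST entries when writing details.php.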

    Read the article

< Previous Page | 552 553 554 555 556 557 558 559 560 561 562 563  | Next Page >