Search Results

Search found 20931 results on 838 pages for 'mysql insert'.


  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it’s the word “Big”. These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation, in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about “global warming” during snowfall. Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V’s instead of S’s). The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New “Big Data” tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4J, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures. What about speed (Velocity)? It’s not just high frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won’t, in order to cope with high data insert speeds, or to extract quickly the required information from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch driven queries and job processing queues have little to do with “velocity”. StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn’t do the discipline of Big Data any favors. Ultimately, though, does analyzing fast moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI? Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs then data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus how to improve Veracity and Value. Perhaps we should just declare the ‘Big’ silent, embrace these new data tools and help develop better practices for their use, just as we did the good old RDBMS? What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?

    Read the article

  • Time to stop using “Execute Package Task” – a way to execute a package in the SSIS catalog, taking advantage of the new project deployment model and the logging and reporting features

    - by Kevin Shyr
    I set out to find a way to dynamically call package in SSIS 2012.  The following are 2 excellent blogs I found; I used them heavily.  The code below has some addition to parameter types and message types, but was made essentially derived entirely from the blogs. http://sqlblog.com/blogs/jamie_thomson/archive/2011/07/16/ssis-logging-in-denali.aspx http://www.ssistalk.com/2012/07/24/quick-tip-run-ssis-2012-packages-synchronously-and-other-execution-options/   The code: Every package will be called by a PackageController package.  The packageController is initialized with some information on which package to run and what information to pass in.   The following is the stored procedure called from the “Execute SQL Task”.  Here is the highlight of the stored procedure It takes in packageName, project name, and folder name (folder in SSIS project deployment to SSIS catalog) The stored procedure sets the package variables of the upcoming package execution Execute package in SSIS Catalog Get the status of the execution.  Also, if exists, get the error message’s message_id and store them in the management database. Return value to “Execute SQL Task” to manage failure properly CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]        @PackageName NVARCHAR(255)        , @ProjectFolder NVARCHAR(255)        , @ProjectName NVARCHAR(255)        , @AuditKey INT        , @DisableNotification BIT        , @PackageExecutionLogID INT AS BEGIN TRY        DECLARE @execution_id BIGINT = 0;        -- Create a package execution        EXEC [SSISDB].[catalog].[create_execution]                     @package_name=@PackageName,                     @execution_id=@execution_id OUTPUT,                     @folder_name=@ProjectFolder,                     @project_name=@ProjectName,                     @use32bitruntime=False;          UPDATE [AUDIT].[PackageInstanceExecutionLog] WITH(ROWLOCK)        SET [SSISCatalogExecutionID] = @execution_id        WHERE [PackageInstanceExecutionLogID] = @PackageExecutionLogID          -- this is to set the execution synchronized so that I can check the result in the end        EXEC [SSISDB].[catalog].[set_execution_parameter_value]                     @execution_id,                      @object_type=50,                     @parameter_name=N'SYNCHRONIZED',                     @parameter_value=1; -- true          /********************************************************         ********************************************************              Section: setting parameters                     Source table:  SSISDB.internal.object_parameters              object_type list:                     20: project level variables                     30: package level variables                     50: execution parameter         ********************************************************         ********************************************************/        EXEC [SSISDB].[catalog].[set_execution_parameter_value]                     @execution_id,                      @object_type=30,                     @parameter_name=N'FromParent_AuditKey',                     @parameter_value=@AuditKey; -- true          EXEC [SSISDB].[catalog].[set_execution_parameter_value]                     @execution_id,                      @object_type=30,                     @parameter_name=N'FromParent_DisableNotification',                     @parameter_value=@DisableNotification; -- true          EXEC [SSISDB].[catalog].[set_execution_parameter_value]                     @execution_id,                      
@object_type=30,                     @parameter_name=N'FromParent_PackageInstanceExecutionID',                     @parameter_value=@PackageExecutionLogID; -- true        /********************************************************         ********************************************************              Section: setting variables END         ********************************************************         ********************************************************/            /* This section is carried over from example code           I don't see a reason to change them yet        */        -- Set our package parameters        EXEC [SSISDB].[catalog].[set_execution_parameter_value]                     @execution_id,                      @object_type=50,                     @parameter_name=N'DUMP_ON_EVENT',                     @parameter_value=1; -- true          EXEC [SSISDB].[catalog].[set_execution_parameter_value]                     @execution_id,                      @object_type=50,                     @parameter_name=N'DUMP_EVENT_CODE',                     @parameter_value=N'0x80040E4D;0x80004005';          EXEC [SSISDB].[catalog].[set_execution_parameter_value]                     @execution_id,                      @object_type=50,                     @parameter_name=N'LOGGING_LEVEL',                     @parameter_value= 1; -- Basic          EXEC [SSISDB].[catalog].[set_execution_parameter_value]                     @execution_id,                      @object_type=50,                     @parameter_name=N'DUMP_ON_ERROR',                     @parameter_value=1; -- true                              /********************************************************         ********************************************************              Section: EXECUTING         ********************************************************         ********************************************************/        EXEC [SSISDB].[catalog].[start_execution]                     @execution_id;        /********************************************************         ********************************************************              Section: EXECUTING END         ********************************************************         ********************************************************/            /********************************************************         ********************************************************              Section: checking execution result                     Source table:  [SSISDB].[catalog].[executions]              status:                     1: created                     2: running                     3: cancelled                     4: failed                     5: pending                     6: ended unexpectedly                     7: succeeded                     8: stopping                     9: completed         ********************************************************         ********************************************************/        if EXISTS(SELECT TOP 1 1                            FROM [SSISDB].[catalog].[executions] WITH(NOLOCK)                            WHERE [execution_id] = @execution_id                                  AND [status] NOT IN (2, 7, 9)) BEGIN                /********************************************************               ********************************************************                     Section: logging error messages                            Source table:  [SSISDB].[internal].[operation_messages]                     message type:   
                         10:  OnPreValidate                             20:  OnPostValidate                             30:  OnPreExecute                             40:  OnPostExecute                             60:  OnProgress                             70:  OnInformation                             90:  Diagnostic                             110:  OnWarning                            120:  OnError                            130:  Failure                            140:  DiagnosticEx                             200:  Custom events                             400:  OnPipeline                     message source type:                            10:  Messages logged by the entry APIs (e.g. T-SQL, CLR Stored procedures)                             20:  Messages logged by the external process used to run package (ISServerExec)                             30:  Messages logged by the package-level objects                             40:  Messages logged by tasks in the control flow                             50:  Messages logged by containers (For, ForEach, Sequence) in the control flow                             60:  Messages logged by the Data Flow Task                                    ********************************************************               ********************************************************/                INSERT INTO AUDIT.PackageInstanceExecutionOperationErrorLink                     SELECT @PackageExecutionLogID                                  ,[operation_message_id]                            FROM [SSISDB].[internal].[operation_messages] WITH(NOLOCK)                            WHERE operation_id = @execution_id                                  AND message_type IN (120, 130)                           EXEC [AUDIT].[FailPackageInstanceExecution] @PackageExecutionLogID, 'SSISDB Internal operation_messages found'                GOTO ReturnTrueAsErrorFlag                /********************************************************               ********************************************************                     Section: checking messages END               ********************************************************               ********************************************************/                /* This part is not really working, so now using rowcount to pass status              --DECLARE @PackageErrorMessage NVARCHAR(4000)              --SET @PackageErrorMessage = @PackageName + 'failed with executionID: ' + CONVERT(VARCHAR(20), @execution_id)                --RAISERROR (@PackageErrorMessage -- Message text.              --     , 18 -- Severity,              --     , 1 -- State,              --     , N'check table AUDIT.PackageInstanceExecutionErrorMessages' -- First argument.              
--     );              */        END        ELSE BEGIN              GOTO ReturnFalseAsErrorFlagToSignalSuccess        END        /********************************************************         ********************************************************              Section: checking execution result END         ********************************************************         ********************************************************/ END TRY BEGIN CATCH        DECLARE @SSISCatalogCallError NVARCHAR(MAX)        SELECT @SSISCatalogCallError = ERROR_MESSAGE()          EXEC [AUDIT].[FailPackageInstanceExecution] @PackageExecutionLogID, @SSISCatalogCallError          GOTO ReturnTrueAsErrorFlag END CATCH;     /********************************************************  ********************************************************    Section: end result  ********************************************************  ********************************************************/ ReturnTrueAsErrorFlag:        SELECT CONVERT(BIT, 1) AS PackageExecutionErrorExists ReturnFalseAsErrorFlagToSignalSuccess:        SELECT CONVERT(BIT, 0) AS PackageExecutionErrorExists   GO
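    For orientation, here is a hedged sketch of how the PackageController's "Execute SQL Task" might call this stored procedure. The parameter values are placeholders and the result-set mapping is an assumption; none of this appears in the original post.

        -- Hypothetical call; in the real package the values come from the task's parameter mapping.
        EXEC [AUDIT].[LaunchPackageExecutionInSSISCatalog]
              @PackageName           = N'ChildPackage.dtsx'
            , @ProjectFolder         = N'ETL'
            , @ProjectName           = N'MyProject'
            , @AuditKey              = 1001
            , @DisableNotification   = 0
            , @PackageExecutionLogID = 42;
        -- The procedure ends with a one-row result set (PackageExecutionErrorExists);
        -- map it to a Boolean SSIS variable (Result Set = Single row) so the control
        -- flow can branch on failure.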

    Read the article

  • Hello World Pagelet

    - by astemkov
    Introduction The goal of this exercise is to give you a basic feel of how you can use Pagelet Producer to proxy a web page We will proxy a simple static Hello World web page, cut one section out of that page and present it as a pagelet that you can later insert on your own application page or to your portal page such as WebCenter Portal space or WebCenter Interaction community page. Hello World sample app This is the static web page we will work with: Let's assume the following: The Hello World web page is running on server http://appserver.company.com:1234/ The Hello World web page path is: http://appserver.company.com:1234/helloworld/ Initial Pagelet Producer setup Let's assume that the Pagelet Producer server is running on http://pageletserver.company.com:8889/pagelets/ First let's check that Pagelet Producer is up and running. In order to do that we just need to access the following URL: http://pageletserver.company.com:8889/pagelets/ And this is what should be returned: Now you can access Pagelet Producer administration screens using this URL: http://pageletserver.company.com:8889/pagelets/admin This is how the UI looks: Now if you connect to the internet via proxy server, you need to configure proxy in Pagelet Producer settings. In the Navigator pane: Jump To - Settings Click on "Proxy" Enter your proxy server configuration: Creating a resource First thing that you need to do is to create a resource for your web page. This will tell Pagelet Producer that all sub-paths of the web page should be proxied. It also will allow you to setup common rules of how your web page should be proxied and will serve as a container for your pagelets. In the Navigator pane: Jump To - Resources Click on any existing resource (ex. welcome_resource) Click on "Create selected type" toolbar button at the top of the Navigator pane Select "Web" in the "Select Producer Type" dialog box and click "OK" Now after the resource is created let's click on "General" sub-item a specify the following values Name = AppServer Source URL = http://appserver.company.com:1234/ Destination URL = /appserver/ Click on "Save" toolbar button at the top of the Navigator pane After the resource is created our web page becomes accessible by the URL: http://pageletserver.company.com:8889/pagelets/appserver/helloworld/ So in original web page address Source URL is replaced with Pagelet Producer URL (http://pageletserver.company.com:8889/pagelets) + Destination URL Creating a pagelet Now let's create "Hello World" pagelet. Under the resource node activate Pagelets subnode Click on "Create selected type" toolbar button at the top of the Navigator pane Click on "General" sub-node of newly created pagelet and specify the following values Name = Hello_World Library = MyLib Library is used for logical grouping. The portals use the "Library" value to group pagelets in their respective UI's. For example, when adding pagelets to a WebCenter Portal space you would see the individual pagelets listed under the "Library" name. URL Suffix = helloworld/index.html this is where the Hello World page html is served from Click on "Save" toolbar button at the top of the Navigator pane The Library name can be anything you want, it doesn't have to match the resource name at all. It is used as a logical grouping of pagelets, and you can include pagelets from multiple resources into the same library or create a new library for each pagelet. 
After you save the pagelet you can access it here: http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/MyLib/Hello_World which is : http://pageletserver.company.com:8889/pagelets/inject/v2/pagelet/ + [Library] + [Name] Or to test the injection of a pagelet into iframe you can click on the pagelets "Documentation" sub-node and use "Access Pagelet using REST" URL: This is what we will see: Clipping The pagelet that we just created covers the whole web page, but we want just the "Hello World" segment of it. So let's clip it. Under the Hello_World pagelet node activate Clipper sub-node Click on "Create selected type" toolbar button at the top of the Navigator pane Specify a Name for newly created clipper. For example: "c1" Click on "Content" sub-node of the clipper Click on "Launch Clipper" button New browser window will open By moving a mouse pointer over the web page select the area you want to clip: Click left mouse button - the browser window will disappear and you will see that Clipping Path was automatically generated Now let's save and access the link from the "Documentation" page again Here's our pagelet nicely clipped and ready for being used on your Web Center Space

    Read the article

  • Why Haven’t NFC Payments Taken Off?

    - by David Dorf
    With the EMV 2015 milestone approaching rapidly, there’s been renewed interest in smartcards, those credit cards with an embedded computer chip.  Back in 1996 I was working for a vendor helping Visa introduce a stored-value smartcard to the US.  Visa Cash was debuted at the 1996 Olympics in Atlanta, and I firmly believed it was the beginning of a cashless society.  (I later worked on MasterCard’s system called Mondex, from the UK, which debuted the following year in Manhattan). But since you don’t have a Visa Cash card in your wallet, it’s obvious the project never took off.  It was convenient for consumers, faster for merchants, and more cost-effective for banks, so why did it fail?  All emerging payment systems suffer from the chicken-and-egg dilemma.  Consumers won’t carry the cards if few merchants accept them, and merchants won’t install the terminals if few consumers have cards. Today’s emerging payment providers are in a similar pickle.  There has to be enough value for all three constituents – consumers, merchants, banks – to change the status quo.  And it’s not enough to exceed the value, it’s got to be a leap in value, because people generally resist change.  ATMs and transit cards are great examples of this, and airline kiosks and self-checkout systems are to a lesser extent. Although Google Wallet and ISIS, the two leading NFC payment platforms in the US, have shown strong commitment, there’s been very little traction.  Yes, I can load my credit card number into my phone then tap to pay, but what was the incremental value over swiping my old card?  For it to be a leap in value, it has to offer more than just payment, which I can do very easily today.  The other two ingredients are thought to be loyalty programs and digital coupons, but neither Google nor ISIS really did them well. Of course a large portion of the mobile phone market doesn’t even support NFC thanks to Apple, and since it’s not in their best interest that situation is unlikely to change.  Another issue is getting access to the “secure element,” the chip inside the phone where accounts numbers can be held securely.  Telco providers and handset manufacturers own that area, and they’re not willing to share with banks.  (Host Card Emulation, which has been endorsed by MasterCard and Visa, might be a solution.) Square recently gave up on its wallet, and MCX (the group of retailers trying to create a mobile payment platform) is very slow out of the gate.  That leaves PayPal and a slew of smaller companies trying to introduce easier ways to pay. But is it really so cumbersome to carry and swipe (soon to insert) a credit card?  Aren’t there more important problems to solve in the retail customer experience?  Maybe Apple will come up with some novel way to use iBeacons and fingerprint identification to make payments, but for now I think we need to focus on upgrading to Chip-and-PIN and tightening security.  In the meantime, NFC payments will continue to struggle.

    Read the article

  • SQL SERVER – How to Ignore Columnstore Index Usage in Query

    - by pinaldave
    Earlier I wrote about SQL SERVER – Fundamentals of Columnstore Index, and the very first question I received in email was the following. “We are using SQL Server 2012 CTP3 and so far so good. In our data warehouse solution we have created 1 non-clustered columnstore index on our large fact table. We have very unique situation but your article did not cover it. We are running few queries on our fact table which is working very efficiently but there is one query which earlier was running very fine but after creating this non-clustered columnstore index this query is running very slow. We dropped the columnstore index and suddenly this one query is running fast but other queries which were benefited by this columnstore index it is running slow. Any workaround in this situation?” In summary, the question in simple words is: “How can we ignore the columnstore index in selective queries?” Very interesting question – I can understand that there may be cases where the columnstore index is not ideal and needs to be ignored. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the columnstore index; the SQL Server engine will then use whichever other index is best. Here is a quick script to prove it. We will first create a sample table and load it with data, then create a columnstore index on it. Once the columnstore index is created, we will write a simple query that uses it, and then show the usage of the query hint.

        USE AdventureWorks
        GO
        -- Create New Table
        CREATE TABLE [dbo].[MySalesOrderDetail](
            [SalesOrderID] [int] NOT NULL,
            [SalesOrderDetailID] [int] NOT NULL,
            [CarrierTrackingNumber] [nvarchar](25) NULL,
            [OrderQty] [smallint] NOT NULL,
            [ProductID] [int] NOT NULL,
            [SpecialOfferID] [int] NOT NULL,
            [UnitPrice] [money] NOT NULL,
            [UnitPriceDiscount] [money] NOT NULL,
            [LineTotal] [numeric](38, 6) NOT NULL,
            [rowguid] [uniqueidentifier] NOT NULL,
            [ModifiedDate] [datetime] NOT NULL
        ) ON [PRIMARY]
        GO
        -- Create clustered index
        CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
            ([SalesOrderDetailID])
        GO
        -- Create sample data
        -- WARNING: This query may run up to 2-10 minutes based on your system's resources
        INSERT INTO [dbo].[MySalesOrderDetail]
        SELECT S1.*
        FROM Sales.SalesOrderDetail S1
        GO 100
        -- Create ColumnStore Index
        CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
            ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
        GO

    Now that we have created the columnstore index, the following query is sure to use it.

        -- Select from the table; this will use the columnstore index
        SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
               SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
        FROM [dbo].[MySalesOrderDetail]
        GROUP BY ProductID
        ORDER BY ProductID
        GO

    We can specify the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX as in the following query, and it will not use the columnstore index.

        -- Select from the table, ignoring the columnstore index
        SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
               SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
        FROM [dbo].[MySalesOrderDetail]
        GROUP BY ProductID
        ORDER BY ProductID
        OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX)
        GO

    Let us clean up the database.

        -- Cleanup
        DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
        GO
        TRUNCATE TABLE dbo.MySalesOrderDetail
        GO
        DROP TABLE dbo.MySalesOrderDetail
        GO

    Again, make sure that you use the hint sparingly and understand its proper implications.
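    As a side note that is not from the original post: if you find yourself adding this hint to most of your queries, you can also take the columnstore index offline entirely instead of hinting around it. A hedged sketch using the sample objects above:

        -- Disable the columnstore index so no query can use it
        ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] DISABLE;
        GO
        -- Rebuild it later to make it available again
        ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] REBUILD;
        GO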
    Make sure that you test it with and without the hint, and select the best option after reviewing the results with your administrator. Here is the question for you – have you started to use SQL Server 2012 for your validation and development (not on production)? It will be interesting to know the answer. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Xorg does not see my monitor EDID

    - by sean farley
    Below is the output from my Xorg.0. X.Org X Server 1.11.3 Release Date: 2011-12-16 [ 22.311] X Protocol Version 11, Revision 0 [ 22.311] Build Operating System: Linux 2.6.42-23-generic x86_64 Ubuntu [ 22.311] Current Operating System: Linux sean-P55-USB3 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:48:16 UTC 2012 x86_64 [ 22.311] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-34-generic root=UUID=0a34603e-aee9-44d1-8982-a5a5a38c3e4d ro quiet splash [ 22.311] Build Date: 29 August 2012 12:12:33AM [ 22.311] xorg-server 2:1.11.4-0ubuntu10.8 (For technical support please see http://www.ubuntu.com/support) [ 22.311] Current version of pixman: 0.24.4 [ 22.311] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 22.311] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 22.311] (==) Log file: "/var/log/Xorg.0.log", Time: Sat Nov 17 13:20:45 2012 [ 22.311] (==) Using config file: "/etc/X11/xorg.conf" [ 22.311] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 22.311] (==) No Layout section. Using the first Screen section. [ 22.311] (==) No screen section available. Using defaults. [ 22.311] (**) |-->Screen "Default Screen Section" (0) [ 22.311] (**) | |-->Monitor "<default monitor>" [ 22.311] (==) No device specified for screen "Default Screen Section". Using the first device section listed. [ 22.311] (**) | |-->Device "Default Device" [ 22.311] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 22.311] (==) Automatically adding devices I have searched all over, and followed lots of dead ends and unanswered questions on this issue. I need to get this monitor recognised so I can use the native resolution of 1600x1200. The Nvidia driver in Windows has no problem with this. The monitor is an old Iiyama HM204DT A. Is there a way of configuring Xorg manually to get these working? I have tried xrandr but this will not work. Output:- sean@sean-P55-USB3:~$ xrandr Screen 0: minimum 8 x 8, current 1152 x 864, maximum 16384 x 16384 DVI-I-0 connected 1152x864+0+0 (normal left inverted right x axis y axis) 0mm x 0mm 1024x768 60.0 + 1360x768 60.0 59.8 1152x864 60.0* 800x600 72.2 60.3 56.2 680x384 119.9 119.6 640x480 59.9 512x384 120.0 400x300 144.4 320x240 120.1 DVI-I-1 disconnected (normal left inverted right x axis y axis) DVI-I-2 disconnected (normal left inverted right x axis y axis) HDMI-0 disconnected (normal left inverted right x axis y axis) DVI-I-3 disconnected (normal left inverted right x axis y axis) Tried Nvidia Xorg.config: sean@sean-P55-USB3:~$ sudo nvidia-xconfig [sudo] password for sean: Using X configuration file: "/etc/X11/xorg.conf". VALIDATION ERROR: Data incomplete in file /etc/X11/xorg.conf. Device section "Default Device" must have a Driver line. Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.nvidia-xconfig-original' Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.backup' New X configuration file written to '/etc/X11/xorg.conf' How do I insert a driver line? This is a bit of a pain as I want to use my Vectorworks cad program in a WinXP Vbox at 1600x1200 but all virtual drives are restricted to the host screen resolution. Do i need to manually create EIDI info in Xorg? I am slightly confused about how Xorg and Nvidia relate Please help
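    For reference only, this is roughly the shape of the Device section that nvidia-xconfig is complaining about; the Identifier must match the one already referenced in /etc/X11/xorg.conf ("Default Device" in the log above), and whether this alone makes the EDID and the 1600x1200 mode appear is not guaranteed.

        Section "Device"
            Identifier "Default Device"
            Driver     "nvidia"
        EndSection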

    Read the article

  • From NaN to Infinity...and Beyond!

    - by Tony Davis
    It is hard to believe that it was once possible to corrupt a SQL Server Database by storing perfectly normal data values into a table; but it is true. In SQL Server 2000 and before, one could inadvertently load invalid data values into certain data types via RPC calls or bulk insert methods rather than DML. In the particular case of the FLOAT data type, this meant that common 'special values' for this type, namely NaN (not-a-number) and +/- infinity, could be quite happily plugged into the database from an application and stored as 'out-of-range' values. This was like a time-bomb. When one then tried to query this data; the values were unsupported and so data pages containing them were flagged as being corrupt. Any query that needed to read a column containing the special value could fail or return unpredictable results. Microsoft even had to issue a hotfix to deal with failures in the automatic recovery process, caused by the presence of these NaN values, which rendered the whole database inaccessible! This problem is history for those of us on more current versions of SQL Server, but its ghost still haunts us. Recently, for example, a developer on Red Gate’s SQL Response team reported a strange problem when attempting to load historical monitoring data into a SQL Server 2005 database via the C# ADO.NET provider. The ratios used in some of their reporting calculations occasionally threw out NaN or infinity values, and the subsequent attempts to load these values resulted in a nasty error. It turns out to be a different manifestation of the same problem. SQL Server 2005 still does not fully support the IEEE 754 standard for floating point numbers, in that the FLOAT data type still cannot handle NaN or infinity values. Instead, they just added validation checks that prevent the 'invalid' values from being loaded in the first place. For people migrating from SQL Server 2000 databases that contained out-of-range FLOAT (or DATETIME etc.) data, to SQL Server 2005, Microsoft have added to the latter's version of the DBCC CHECKDB (or CHECKTABLE) command a DATA_PURITY clause. When enabled, this will seek out the corrupt data, but won’t fix it. You have to do this yourself in what can often be a slow, painful manual process. Our development team, after a quizzical shrug of the shoulders, simply decided to represent NaN and infinity values as NULL, and move on, accepting the minor inconvenience of not being able to tell them apart. However, what of scientific, engineering and other applications that really would like the luxury of being able to both store and access these perfectly-reasonable floating point data values? The sticking point seems to be the stipulation in the IEEE 754 standard that, when NaN is compared to any other value including itself, the answer is "unequal" (i.e. FALSE). This is clearly different from normal number comparisons and has repercussions for such things as indexing operations. Even so, this hardly applies to infinity values, which are single definite values. In fact, there is some encouraging talk in the Connect note on this issue that they might be supported 'in the SQL Server 2008 timeframe'. If didn't happen; SQL 2008 doesn't support NaN or infinity values, though one could be forgiven for thinking otherwise, based on the MSDN documentation for the FLOAT type, which states that "The behavior of float and real follows the IEEE 754 specification on approximate numeric data types". 
However, the truth is revealed in the XPath documentation, which states that "…float (53) is not exactly IEEE 754. For example, neither NaN (Not-a-Number) nor infinity is used…". Is it really so hard to fix this problem the right way, and properly support in SQL Server the IEEE 754 standard for the floating point data type, NaNs, infinities and all? Oracle seems to have managed it quite nicely with its BINARY_FLOAT and BINARY_DOUBLE types, so it is technically possible. We have an enterprise-class database that is marketed as being part of an 'integrated' Windows platform. Absurdly, we have .NET and XPath libraries that fully support the standard for floating point numbers, and we can't even properly store these values, let alone query them, in the SQL Server database! Cheers, Tony.
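    As a footnote to the DATA_PURITY discussion above: the check itself is a one-liner (the database name is a placeholder), and as noted it only reports the out-of-range values rather than repairing them.

        DBCC CHECKDB (N'YourMigratedDatabase') WITH DATA_PURITY;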

    Read the article

  • Free Software – Ubiquitous in the Public Sector

    - by gravax
    NOTE: This article served as the basis for content published in June 2011 in the magazine Acteurs Publics. Created several decades ago to meet a need for sharing knowledge and skills, free software goes by several names, originally English, of which "Free Software" and "Open Source" are the most widely used. In English the word "free" can mean both free as in freedom and free of charge, which created a certain confusion that does not exist in French with the word "libre". As a result, the acronym FOSS or FLOSS, for "Free, Libre, Open Source Software", is often used to remove the ambiguity.

    Nowadays, free software has become ubiquitous in the public sector. It addresses several critical needs, including cost control; choice (of partner, of software, of features); the freedom to modify applications to adapt them to one's own needs; and the security that comes from the fact that many developers and users have been able to review the quality of the code. Another very common trait of free software is its nearly systematic adherence to industry standards, which guarantees simple, easy integration with the existing information system. There are, however, factors to take into account when making strategic free software choices. While cost is clearly a selection factor that can lead to free software, that is mainly because free software is often available in a no-cost, freely downloadable version. But this is only the tip of the iceberg. When putting software into production you will need supporting services, starting with integration, where the choice of partners will be all the wider the more popular and well known the chosen software is, which drives costs down through healthy competition. You will also need to plan for technical support. Here again, the popularity of the chosen software widens the range of possible support providers. The choice should be made against very solid criteria, in particular the ability to commit to service levels, 24/7 availability (the country does not stop running at night or on weekends), and, where relevant, geographic coverage matching the business you carry out (a country like France, with its overseas departments and territories, spans a large share of the planet's time zones and geographic areas).

    Most public services, whether in education, healthcare, or government, already use free software. On the infrastructure side you find products such as the MySQL database, highly valued in the education world for building e-learning platforms in conjunction with other free products such as Moodle, or GlassFish, the application server prized by developers for its adherence to the Java EE 6 standard and its ease of implementation. Linux is extremely widespread as a free operating system in the datacenter, but also on the desktop. Virtualization tools such as Oracle VM, derived from Xen, are found in the datacenter, and VirtualBox on the developer's workstation. With such a range of solutions and tools in the free software world, Oracle brings the public sector targeted, effective answers to market needs, including technical support and the associated quality of service.

    Read the article

  • SQL SERVER – master Database Log File Grew Too Big

    - by pinaldave
    Couple of the days ago, I received following email and I find this email very interesting and I feel like sharing with all of you. Note: Please read the whole email before providing your suggestions. “Hi Pinal, If you can share these details on your blog, it will help many. We understand the value of the master database and we take its regular back up (everyday midnight). Yesterday we noticed that our master database log file has grown very large. This is very first time that we have encountered such an issue. The master database is in simple recovery mode; so we assumed that it will never grow big; however, we now have a big log file. We ran the following command USE [master] GO DBCC SHRINKFILE (N'mastlog' , 0, TRUNCATEONLY) GO We know this command will break the chains of LSN but as per our understanding; it should not matter as we are in simple recovery model.     After running this, the log file becomes very small. Just to be cautious, we took full backup of the master database right away. We totally understand that this is not the normal practice; so if you are going to tell us the same, we are aware of it. However, here is the question for you? What operation in master database would have caused our log file to grow too large? Thanks, [name and company name removed as per request]“ Here was my response to them: “Hi [name removed], It is great that you are aware of all the right steps and method. Taking full backup when you are not sure is always a good practice. Regarding your question what could have caused your master database log to grow larger, let me try to guess what could have happened. Do you have any user table in the master database? If yes, this is not recommended and also NOT a good practice. If have user tables in master database and you are doing any long operation (may be lots of insert, update, delete or rebuilding them), then it can cause this situation. You have made me curious about your scenario; do revert back. Kind Regards, Pinal” Within few minutes I received reply: “That was it Pinal. We had one of the maintenance task log tables created in the master table, which had many long transactions during the night. We moved it to newly created database named ‘maintenance’, and we will keep you updated.” I was very glad to receive the email. I do not suggest that any user table should be created in the master database. It should be left alone from user objects. Now here is the question for you – can you think of any other reason for master log file growth? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
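    As a follow-up to the diagnosis above, here is a quick, hedged check for user objects that have crept into master; any rows returned are candidates for moving to a user database.

        USE [master]
        GO
        -- User tables only; objects shipped with SQL Server are filtered out
        SELECT name, create_date, modify_date
        FROM sys.tables
        WHERE is_ms_shipped = 0
        ORDER BY create_date;
        GO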

    Read the article

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    In our days data integration often becomes a key aspect of business success. For business analysts it’s very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc. to make correct and successful decisions. There are various data integration solutions on market, and today I will tell about one of them – Skyvia. Skyvia is a cloud data integration service, which allows integrating data in cloud CRMs and different relational databases. It is a completely online solution and does not require anything except for a browser. Skyvia provides powerful etl tools for data import, export, replication, and synchronization for SQL Server and other databases and cloud CRMs. You can use Skyvia data import tools to load data from various sources to SQL Server (and SQL Azure). Skyvia supports such cloud CRMs as Salesforce and Microsoft Dynamics CRM and such databases as MySQL and PostgreSQL. You even can migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services, such as Dropbox, Box, Google Drive, or FTP servers. When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages. In SQL Server databases (and other relational databases) it creates additional tracking tables and triggers. This allows synchronizing only the changed data. Skyvia also maps records by their primary key values to each other, so it does not require different sources to have the same primary key structure. It still can match the corresponding records without having to add any additional columns or changing data structure. The only requirement for synchronization is that primary keys must be autogenerated. With Skyvia it’s not necessary for data to have the same structure in integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structure. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc. You may use data splitting – loading data from a single CSV file or source table to multiple related target tables. Or you may load data from several source CSV files or tables to several related target tables. In each case Skyvia preserves data relations. It builds corresponding relations between the target data automatically. When you often work with cloud CRM data, native CRM data reporting and analysis tools may be not enough for you. And there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to an SQL Server database and apply corresponding SQL Server tools to the data. In such case you can use Skyvia data replication tools. It allows you to quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You need just to specify columns to copy data from. Target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need. Skyvia also provides capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. 
These files can be either downloadable manually or loaded to cloud file storages or FTP server. You can use export, for example, to backup SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization – just configure it once, then schedule it, and benefit from automatic data integration with Skyvia. Currently registration and using Skyvia is completely free, so you can try it yourself and find out whether its data migration and integration tools suits for you. Visit this link to register on Skyvia: https://app.skyvia.com/register Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Cloud Computing
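    The article notes that synchronization creates additional tracking tables and triggers in the SQL Server database. Their names are not documented here, so as a purely generic sketch (no Skyvia-specific object names assumed) you can list the most recently created user tables and triggers after the first synchronization run:

        -- Recently created user tables (U) and DML triggers (TR), newest first
        SELECT name, type_desc, create_date
        FROM sys.objects
        WHERE type IN ('U', 'TR')
          AND is_ms_shipped = 0
        ORDER BY create_date DESC;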

    Read the article

  • JDBC Triggers

    - by Tim Dexter
    Received a question from a customer last week, they were using the new rollup patch on top of 10.1.3.4.1. What are these boxes for? Don't you know? Surely? Well, they are for ... that new functionality, you know it's in the user docs, that thingmabobby doodah. OK, I dont know either, I can have a guess but let me check first. Serveral IM sessions, emails and a dig through the readme for the new patch and I had my answer. Its not in the official documentation, yet. Leslie is on the case. The two fields were designed to allow an Admin to set a users context attributes before a connection is made to a database and for un-setting the attributes after the connection is broken by the extraction engine. We got a sample from the Enterprise Manager team on how they will be using it with their VPD connections. FUNCTION bip_to_em_user (user_name_in IN VARCHAR2) RETURN BOOLEAN IS BEGIN SETEMUSERCONTEXT(user_name_in, MGMT_USER.OP_SET_IDENTIFIER); return TRUE; END bip_to_em_user; And used in the jdbc data source definition like this (pre-process function): sysman.mgmt_bip.bip_to_em_user(:xdo_user_name) You, of course can call any function that is going to return a boolean value, another example might be. FUNCTION set_per_process_username (username_in IN VARCHAR2) RETURN BOOLEAN IS BEGIN SETUSERCONTEXT(username_in); return TRUE; END set_per_process_username Just use your own function/package to set some user context. Very grateful for the mail from Leslie on the EM team's usage but I had to try it out. Rather than set up a VPD, I opted for a simpler test. Can I log the comings and goings of users and their queries using the same pre-process text box. Reaching back into the depths of my developer brain to remember some pl/sql, it was not that deep and I came up with: CREATE OR REPLACE FUNCTION BIPTEST (user_name_in IN VARCHAR2, smode IN VARCHAR2) RETURN BOOLEAN AS BEGIN INSERT INTO LOGTAB VALUES(user_name_in, sysdate,smode); RETURN true; END BIPTEST; To call it in the pre-fetch trigger. BIPTEST(:xdo_user_name) Not going to set the pl/sql world alight I know, but you get the idea. As a new connection is made to the database its logged in the LOGTAB table. The SMODE value just sets if its an entry or an exit. I used the pre- and post- boxes. NAME UPDATE_DATE S_FLAG oracle 14-MAY-10 09.51.34.000000000 AM Start oracle 14-MAY-10 10.23.57.000000000 AM Finish administrator 14-MAY-10 09.51.38.000000000 AM Start administrator 14-MAY-10 09.51.38.000000000 AM Finish oracle 14-MAY-10 09.51.42.000000000 AM Start oracle 14-MAY-10 09.51.42.000000000 AM Finish It works very well, I had some fun trying to find a nasty query for the extraction engine so that the timestamps from in to out actually had a difference. That engine is fast! The only derived value you can pass from BIP is :xdo_user_name. None of the other server values are available. Connection pools are not currently supported but planned for a future release. Now you know what those fields are for and look for some official documentation, rather than my ramblings, coming soon!
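    One gap in the example above: the LOGTAB table that BIPTEST inserts into is never defined. A plausible definition, matching the three columns shown in the results (this is an assumption; adjust names and sizes as needed), would be:

        -- Assumed definition of the logging table used by BIPTEST
        CREATE TABLE LOGTAB (
            name        VARCHAR(100),  -- the :xdo_user_name passed in by BI Publisher
            update_date TIMESTAMP,     -- populated with sysdate by the function
            s_flag      VARCHAR(20)    -- 'Start' (pre-process) or 'Finish' (post-process)
        );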

    Read the article

  • SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement

    - by pinaldave
    For the last few weeks, I have been doing Friday Puzzles and I am really loving it. Yesterday I received a very interesting question by Navneet Chaurasia on Facebook Page. He was asked this question in one of the interview questions for job. Please read the original thread for a complete idea of the conversation. I am presenting the same question here. Puzzle Let us assume there is a single column in the table called Gender. The challenge is to write a single update statement which will flip or swap the value in the column. For example if the value in the gender column is ‘male’ swap it with ‘female’ and if the value is ‘female’ swap it with ‘male’. Here is the quick setup script for the puzzle. USE tempdb GO CREATE TABLE SimpleTable (ID INT, Gender VARCHAR(10)) GO INSERT INTO SimpleTable (ID, Gender) SELECT 1, 'female' UNION ALL SELECT 2, 'male' UNION ALL SELECT 3, 'male' GO SELECT * FROM SimpleTable GO The above query will return following result set. The puzzle was to write a single update column which will generate following result set. There are multiple answers to this simple puzzle. Let me show you three different ways. I am assuming that the column will have either value ‘male’ or ‘female’ only. Method 1: Using CASE Statement I believe this is going to be the most popular solution as we are all familiar with CASE Statement. UPDATE SimpleTable SET Gender = CASE Gender WHEN 'male' THEN 'female' ELSE 'male' END GO SELECT * FROM SimpleTable GO Method 2: Using REPLACE  Function I totally understand it is the not cleanest solution but it will for sure work in giving situation. UPDATE SimpleTable SET Gender = REPLACE(('fe'+Gender),'fefe','') GO SELECT * FROM SimpleTable GO Method 3: Using IIF in SQL Server 2012 If you are using SQL Server 2012 you can use IIF and get the same effect as CASE statement. UPDATE SimpleTable SET Gender = IIF(Gender = 'male', 'female', 'male') GO SELECT * FROM SimpleTable GO You can read my article series on SQL Server 2012 various functions over here. SQL SERVER – Denali – Logical Function – IIF() – A Quick Introduction SQL SERVER – Detecting Leap Year in T-SQL using SQL Server 2012 – IIF, EOMONTH and CONCAT Function Let us clean up. DROP TABLE SimpleTable GO Question to you: I came up with three simple tricks where there is a single UPDATE statement which swaps the values in the column. Do you know any other simple trick? If yes, please post here in the comments. I will pick two random winners from all the valid answers. Winners will get 1) Print Copy of SQL Server Interview Questions and Answers 2) Free Learning Code for Online Video Courses I will announce the winners on coming Monday. Reference:  Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
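    For what it is worth, here is one more single-statement variation, a sketch that, like the REPLACE trick, assumes the column only ever contains 'male' or 'female': NULLIF returns NULL when the existing value is already 'male', and ISNULL then falls back to 'female'.

        -- One more variation: using NULLIF and ISNULL
        UPDATE SimpleTable
        SET Gender = ISNULL(NULLIF('male', Gender), 'female')
        GO
        SELECT * FROM SimpleTable
        GO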

    Read the article

  • A quick look at: sys.dm_os_buffer_descriptors

    - by Jonathan Allen
    SQL Server places data into cache as it reads it from disk so as to speed up future queries. This dmv lets you see how much data is cached at any given time and knowing how this changes over time can help you ensure your servers run smoothly and are adequately resourced to run your systems. This dmv gives the number of cached pages in the buffer pool along with the database id that they relate to: USE [tempdb] GO SELECT COUNT(*) AS cached_pages_count , CASE database_id WHEN 32767 THEN 'ResourceDb' ELSE DB_NAME(database_id) END AS Database_name FROM sys.dm_os_buffer_descriptors GROUP BY DB_NAME(database_id) , database_id ORDER BY cached_pages_count DESC; This gives you results which are quite useful, but if you add a new column with the code: …to convert the pages value to show a MB value then they become more relevant and meaningful. To see how your server reacts to queries, start up SSMS and connect to a test server and database – mine is called AdventureWorks2008. Make sure you start from a know position by running: -- Only run this on a test server otherwise your production server's-- performance may drop off a cliff and your phone will start ringing. DBCC DROPCLEANBUFFERS GO Now we can run a query that would normally turn a DBA’s hair white: USE [AdventureWorks2008] go SELECT * FROM [Sales].[SalesOrderDetail] AS sod INNER JOIN [Sales].[SalesOrderHeader] AS soh ON [sod].[SalesOrderID] = [soh].[SalesOrderID] …and then check our cache situation: A nice low figure – not! Almost 2000 pages of data in cache equating to approximately 15MB. Luckily these tables are quite narrow; if this had been on a table with more columns then this could be even more dramatic. So, let’s make our query more efficient. After resetting the cache with the DROPCLEANBUFFERS and FREEPROCCACHE code above, we’ll only select the columns we want and implement a WHERE predicate to limit the rows to a specific customer. SELECT [sod].[OrderQty] , [sod].[ProductID] , [soh].[OrderDate] , [soh].[CustomerID] FROM [Sales].[SalesOrderDetail] AS sod INNER JOIN [Sales].[SalesOrderHeader] AS soh ON [sod].[SalesOrderID] = [soh].[SalesOrderID] WHERE [soh].[CustomerID] = 29722 …and check our effect cache: Now that is more sympathetic to our server and the other systems sharing its resources. I can hear you asking: “What has this got to do with logging, Jonathan?” Well, a smart DBA will keep an eye on this metric on their servers so they know how their hardware is coping and be ready to investigate anomalies so that no ‘disruptive’ code starts to unsettle things. Capturing this information over a period of time can lead you to build a picture of how a database relies on the cache and how it interacts with other databases. This might allow you to decide on appropriate schedules for over night jobs or otherwise balance the work of your server. 
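    The extra column mentioned above appears only as a screenshot in the original post; based on the 8 KB-per-page arithmetic used later in the logging query, it presumably looks something like this:

        USE [tempdb]
        GO
        SELECT COUNT(*) AS cached_pages_count
             , ( COUNT(*) * 8.0 ) / 1024 AS MB      -- 8 KB pages converted to megabytes
             , CASE database_id
                    WHEN 32767 THEN 'ResourceDb'
                    ELSE DB_NAME(database_id)
               END AS Database_name
        FROM sys.dm_os_buffer_descriptors
        GROUP BY DB_NAME(database_id)
               , database_id
        ORDER BY cached_pages_count DESC;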
You could schedule this job to run with a SQL Agent job and store the data in your DBA’s database by creating a table with: IF OBJECT_ID('CachedPages') IS NOT NULL DROP TABLE CachedPages CREATE TABLE CachedPages ( cached_pages_count INT , MB INT , Database_Name VARCHAR(256) , CollectedOn DATETIME DEFAULT GETDATE() ) …and then filling it with: INSERT INTO [dbo].[CachedPages] ( [cached_pages_count] , [MB] , [Database_Name] ) SELECT COUNT(*) AS cached_pages_count , ( COUNT(*) * 8.0 ) / 1024 AS MB , CASE database_id WHEN 32767 THEN 'ResourceDb' ELSE DB_NAME(database_id) END AS Database_name FROM sys.dm_os_buffer_descriptors GROUP BY database_id After this has been left logging your system metrics for a while you can easily see how your databases use the cache over time and may see some spikes that warrant your attention. This sort of logging can be applied to all sorts of server statistics so that you can gather information that will give you baseline data on how your servers are performing. This means that when you get a problem you can see what statistics are out of their normal range and target you efforts to resolve the issue more rapidly.
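    Once the CachedPages table has collected a few days of samples, a simple trend query (a sketch using only the columns defined above) is enough to spot per-database spikes:

        -- Cache usage per database over the last 7 days, newest first
        SELECT Database_Name
             , CollectedOn
             , MB
        FROM dbo.CachedPages
        WHERE CollectedOn >= DATEADD(DAY, -7, GETDATE())
        ORDER BY Database_Name
               , CollectedOn DESC;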

    Read the article

  • SQLAuthority News – Amazon Gift Card Raffle for Beta Tester Feedback for NuoDB

    - by pinaldave
    As regular readers know I’ve been spending some time working with the NuoDB beta software. They contacted me last week and asked if I would give you a chance to try their new web-based console for their scalable, SQL-compliant database. They have just put out their final beta release, Beta 9.  It contains a preview of a new web-based “NuoConsole” that will replace and extend the functionality of their current desktop version.  I haven’t spent any time with the new console yet but a really quick look tells me it should make it easier to do deeper monitoring than the older one. It also looks like they have added query-level reporting through the console. I will try to play with it soon. NuoDB is doing a last, big push to get some more feedback from developers before they release their 1.0 product sometime in the next several weeks. Since the console is new, they are especially interested in some quick feedback on it before general availability. For SQLAuthority readers only, NuoDB will raffle off three $50 Amazon gift cards in exchange for your feedback on the NuoConsole preview. Here’s how to Enter Download NuoDBeta 9 here You must build a domain before you can start the console. Launch the Web Console. Windows Code: start java -jar jarnuodbwebconsole.jar Mac, Linux, Solaris, Unix Code: java -jar jar/nuodbwebconsole.jar Access the Web Console: Code: http://localhost:8080 When you have tried it out, go to a short (8 question) survey to enter the raffle Click here for the survey You must complete the survey before midnight EDT on October 17, 2012. Here’s what else they are saying about this last beta before general availability: Beta 9 now supports the Zend PHP framework so that PHP developers can directly integrate web applications with NuoDB. Multi-threaded HDFS support – NuoDB Storage Managers can now be configured to persist data to the high performance Hadoop distributed file system (HDFS). Beta 9 optimizes for multi-thread I/O streams at maximum performance. This enhancement allows users to make Hadoop their core storage with no extra effort which is a pretty cool idea. Improved Performance –On a single transaction node, Beta 9 offers performance comparable with MySQL and MariaDB. As additional nodes are added, NuoDB performance improves significantly at near linear scale. Query & Explain Plan Logging – Beta 9 introduces SQL explain plans for your queries. Qualify queries with the word “EXPLAIN” and NuoDB will respond with the details of the execution plan allowing performance optimization to SQL. Through the NuoConsole, you can now kill hung or long running queries. Java App Server Support – Beta 9 now supports leading Web JEE app servers including JBoss, Tomcat, and ColdFusion. They’ve also reported: Improved PHP/PDO drivers Support for Drupal Faster Ruby on Rails driver The Hibernate Dialect supports version 4.1 And good news for my readers: numerous SQL enhancements They will share the results of the web console feedback with me.  I’ll let you know how it goes. Also the winner of their last contest was Jaime Martínez Lafargue!  Do leave a comment here once you complete the survey.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL Authority Tagged: NuoDB

    Read the article

  • Iron Speed Designer 7.0 - the great gets greater!

    - by GGBlogger
    For Immediate Release Iron Speed, Inc. Kelly Fisher +1 (408) 228-3436 [email protected] http://www.ironspeed.com       Iron Speed Version 7.0 Generates SharePoint Applications New! Support for Microsoft SharePoint speeds application generation and deployment   San Jose, CA – June 8, 2010. Software development tools-maker Iron Speed, Inc. released Iron Speed Designer Version 7.0, the latest version of its popular Web 2.0 application generator. Iron Speed Designer generates rich, interactive database and reporting applications for .NET, Microsoft SharePoint and the Cloud.    In addition to .NET applications, Iron Speed Designer V7.0 generates database-driven SharePoint applications. The ability to quickly create database-driven applications for SharePoint eliminates a lot of work, helping IT departments generate productivity-enhancing applications in just a few hours.  Generated applications include integrated SharePoint application security and use SharePoint master pages.    “It’s virtually impossible to build database-driven application in SharePoint by hand. Iron Speed Designer V7.0 not only makes this possible, the tool makes it easy.” – Razi Mohiuddin, President, Iron Speed, Inc.     Integrated SharePoint application security Generated applications include integrated SharePoint application security. SharePoint sites and their groups are used to retrieve security roles. Iron Speed Designer validates the user against a Microsoft SharePoint server on your network by retrieving the logged in user’s credentials from the SharePoint Context.    “The Iron Speed Designer generated application integrates seamlessly with SharePoint security, removing the hassle of designing, testing and approving your own security layer.” -Michael Landi, Solutions Architect, Light Speed Solutions     SharePoint Solution Packages Iron Speed Designer V7.0 creates SharePoint Solution Packages (WSPs) for easy application deployment. Using the Deployment Wizard, a single application WSP is created and can be deployed to your SharePoint server.   “Iron Speed Designer is the first product on the market that allows easy and painless deployment of database-driven .NET web applications inside the SharePoint environment.” -Bryan Patrick, Developer, Pseudo Consulting     SharePoint master pages and themes In V7.0, generated applications use SharePoint master pages and contain the same content as other SharePoint pages. Generated applications use the current SharePoint color scheme and display standard SharePoint navigation controls on each page.   “Iron Speed Designer preserves the look and feel of the SharePoint environment in deployed database applications without additional hand-coding.” -Kirill Dmitriev, Software Developer, Iron Speed, Inc.     Iron Speed Designer Version 7.0 System Requirements Iron Speed Designer Version 7.0 runs on Microsoft Windows 7, Windows Vista, Windows XP, and Windows Server 2003 and 2008. It generates .NET Web applications for Microsoft SQL Server, Oracle, Microsoft Access and MySQL. These applications may be deployed on any machine running the .NET Framework. Iron Speed Designer supports Microsoft SharePoint 2007 and Windows SharePoint Services (WSS3). Find complete information about Iron Speed Designer Version 7.0 at www.ironspeed.com.     About Iron Speed, Inc. Iron Speed is the leader in enterprise-class application generation. Our software development tools generate database and reporting applications in significantly less time and cost than hand-coding. 
Our flagship product, Iron Speed Designer, is the fastest way to deliver applications for the Microsoft .NET and software-as-a-service cloud computing environments.   With products built on decades of experience in enterprise application development and large-scale e-commerce systems, Iron Speed products eliminate the need for developers to choose between "full featured" and "on schedule."   Founded in 1999, Iron Speed is well funded with a capital base of over $20M and strategic investors that include Arrow Electronics and Avnet, as well as executives from AMD, Excelan, Onsale, and Oracle. The company is based in San Jose, Calif., and is located online at www.ironspeed.com.

    Read the article

  • GRUB 2 problem after Mac OS X update

    - by vallllll
    I have a MacBook Pro in dual boot Mac OS X / Ubuntu 12.04 (Precise Pangolin). When I boot it I have a rEFIt menu, and I can choose between Mac OS X and Linux. A few days ago I updated Mac OS X from 10.7 (Lion) to 10.8 (Mountain Lion) using a .dmg image provided by my company. Since then, when I select Linux in rEFIt it says: No bootable device --insert boot disk and press any key. I have tried going to the rEFIt partitioning tool. This is what I got: As suggested in Mac OSX Mavericks update rEFIT broken I wanted to fix the issue the same way as AndrewM, but I don't have the option "MBR table must be updated". Then I booted from the Ubuntu 12.04 CD, chose "repair broken system", and chose the root partition /dev/sda6, as this is where my Ubuntu file system is. I got a shell, but I don't really know how to repair the problem: if this were just a Windows dual boot, a GRUB update would solve the issue, but here I don't know where GRUB 2 is installed. Here are the results from Parted, and it is a bit confusing for me as the Mac partition is the one with boot: As you can see, entry 1 is an EFI system partition and is the boot partition, so I wonder if I should install GRUB there or in sda6, which is the Ubuntu filesystem. I am also not sure whether I should work in the rEFIt shell or in Ubuntu. Unfortunately, I don't remember where GRUB was before the update. UPDATE: using the same link above I tried RoundSparrow hilltx's answer and installed rEFInd, but the result is the same... still no bootable device when I select Linux. UPDATE 2: just used the alternate CD again, mounted /dev/sda6 and then ran update-grub. It seemed to work and started listing all my kernels. But after rebooting several times, still no bootable device when I select Linux in rEFInd. UPDATE 3: I have tried to boot from the Ubuntu CD and select "boot from first available filesystem". I got an error and dropped to a grub rescue shell. 
I even followed the indications on this link but was unable to boot as I tried to use sdb6 but no luck UPDATE 4 as per Rob Smith request here is out put from ls -l $(find /EFI -iname "*.efi") *MACOSX -rw-r--r--@ 1 root admin 55048 29 oct 17:44 /EFI/refind/drivers_x64/btrfs_x64.efi -rw-r--r--@ 1 root admin 38888 29 oct 17:44 /EFI/refind/drivers_x64/ext2_x64.efi -rw-r--r--@ 1 root admin 39304 29 oct 17:44 /EFI/refind/drivers_x64/ext4_x64.efi -rw-r--r--@ 1 root admin 43432 29 oct 17:44 /EFI/refind/drivers_x64/hfs_x64.efi -rw-r--r--@ 1 root admin 38984 29 oct 17:44 /EFI/refind/drivers_x64/iso9660_x64.efi -rw-r--r--@ 1 root admin 43656 29 oct 17:44 /EFI/refind/drivers_x64/reiserfs_x64.efi -rw-r--r--@ 1 root admin 175016 29 oct 17:44 /EFI/refind/refind_x64.efi -rw-rw-r-- 1 root admin 73232 7 mar 2010 /EFI/tools/dbounce.efi -rw-rw-r-- 1 root admin 763248 7 mar 2010 /EFI/tools/dhclient.efi -rw-rw-r-- 1 root admin 67024 7 mar 2010 /EFI/tools/drawbox.efi -rw-rw-r-- 1 root admin 71312 7 mar 2010 /EFI/tools/dumpfv.efi -rw-rw-r-- 1 root admin 84848 7 mar 2010 /EFI/tools/dumpprot.efi -rw-rw-r-- 1 root admin 472912 7 mar 2010 /EFI/tools/ed.efi -rw-rw-r-- 1 root admin 143856 7 mar 2010 /EFI/tools/edit.efi -rw-rw-r-- 1 root admin 1801008 7 mar 2010 /EFI/tools/ftp.efi -rw-r--r--@ 1 root admin 47848 29 oct 17:44 /EFI/tools/gptsync_x64.efi -rw-rw-r-- 1 root admin 320560 7 mar 2010 /EFI/tools/hexdump.efi -rw-rw-r-- 1 root admin 286384 7 mar 2010 /EFI/tools/hostname.efi -rw-rw-r-- 1 root admin 534416 7 mar 2010 /EFI/tools/ifconfig.efi -rw-rw-r-- 1 root admin 395344 7 mar 2010 /EFI/tools/loadarg.efi -rw-rw-r-- 1 root admin 587408 7 mar 2010 /EFI/tools/ping.efi -rw-rw-r-- 1 root admin 730416 7 mar 2010 /EFI/tools/pppd.efi -rw-rw-r-- 1 root admin 561360 7 mar 2010 /EFI/tools/route.efi -rw-rw-r-- 1 root admin 1961712 7 mar 2010 /EFI/tools/shell.efi -rw-rw-r-- 1 root admin 750224 7 mar 2010 /EFI/tools/tcpipv4.efi -rw-rw-r-- 1 root admin 4048 7 mar 2010 /EFI/tools/textmode.efi -rw-rw-r-- 1 root admin 320656 7 mar 2010 /EFI/tools/which.efi *LINUX

    Read the article

  • Healthcare and Distributed Data Don't Mix

    - by [email protected]
    How many times have you heard the story? Hard disk goes missing, USB thumb drive goes missing, laptop goes missing... Not a week goes by that we don't hear about our data going missing... Healthcare data is a big one, but we hear about credit card data, pricing info, corporate intellectual property... When I have spoken at Security and IT conferences, part of my message is "Why do you give your users data to lose in the first place?" I don't suggest they can't have access to it... in fact I work for the company that provides the premiere data security and desktop solutions that DO provide access. Access isn't the issue. 'Keeping the data' is the issue. We are all human - we all make mistakes... I fault no one for having their car stolen or for dropping a USB thumb drive. (Well, except the thieves - I can certainly find some fault there.) Where I find fault is in policy (or the lack thereof, sometimes) that allows users to carry around private, and important, data with them. Mr. Director of IT - it is your fault, not theirs. Ms. CSO - look in the mirror. It isn't like one can't find a network to access the data from. You are on a network right now. How many wireless ones (wifi, mifi, cellular...) are there around you, right now? Allowing employees to remove data from the confines of (wait for it...) THE DATA CENTER is just plain indefensible when it isn't required. The argument that the laptop had a password and the hard disk was encrypted is ridiculous. An encrypted drive tells thieves that before they sell the stolen unit for $75, they should crack the encryption and ascertain what the REAL value of the laptop is... credit card info, identity info, pricing lists, banking transactions... a veritable treasure trove of info people give away on an 'encrypted disk'. What started this latest rant on lack of data control was an article in Government Health IT that was forwarded to me by Denny Olson, an Oracle Principal Sales Consultant in Minnesota. The full article is here, but the point was that a couple of laptops went missing in a couple of different cases, and... well... no one knows where the data is, and yes - they were loaded with patient info. What were you thinking? Obviously you can't steal data from a Sun Ray appliance... since it has no data, nor any storage to keep the data on, and Secure Global Desktop allows access from Mac, Linux and Windows client devices... but in all cases, there is no keeping the data unless you explicitly allow for it in your policy. Since you can get at the data securely from any network, why would you want to take personal responsibility for it? Both Sun Rays and Secure Global Desktop are widely used in healthcare... but clearly not widely enough. We need to do a better job of getting the message out - healthcare (or insert your business type here) and distributed data don't mix. Then add Hot Desking and 'follow me printing' and you have something that clinicians (and CSOs) love. Thanks for putting up my blood pressure, Denny.

    Read the article

  • Big Data – Data Mining with Hive – What is Hive? – What is HiveQL (HQL)? – Day 15 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the operational database in the Big Data story. In this article we will understand what Hive and HQL are in the Big Data story. Yahoo started working on Pig (we will cover that in the next blog post) for their application deployment on Hadoop; the goal of Yahoo was to manage their unstructured data. Similarly, Facebook started deploying their warehouse solutions on Hadoop, which resulted in Hive. The reason for going with Hive is that traditional warehousing solutions were getting very expensive. What is HIVE? Hive is a data warehousing infrastructure for Hadoop. Its primary responsibility is to provide data summarization, query and analysis. It supports analysis of large datasets stored in Hadoop’s HDFS as well as on the Amazon S3 filesystem. The best part of Hive is that it supports SQL-like access to structured data, known as HiveQL (or HQL), as well as big data analysis with the help of MapReduce. Hive is not built to get a quick response to queries; it is built for data mining applications. Data mining applications can take from several minutes to several hours to analyze the data, and Hive is primarily used there. HIVE Organization The data is organized in three different formats in Hive. Tables: They are very similar to RDBMS tables and contain rows and columns. Hive is just layered over the Hadoop File System (HDFS), hence tables are directly mapped to directories of the filesystem. It also supports tables stored in other native file systems. Partitions: A Hive table can have more than one partition. Partitions are mapped to subdirectories in the file system as well. Buckets: In Hive, data may be divided into buckets. Buckets are stored as files within a partition in the underlying file system. Hive also has a metastore, which stores all the metadata. It is a relational database containing various information related to the Hive schema (column types, owners, key-value data, statistics etc.). We can use a MySQL database here. What is HiveQL (HQL)? The Hive query language provides the basic SQL-like operations. Here are a few of the tasks which HQL can do easily: create and manage tables and partitions; support various relational, arithmetic and logical operators; evaluate functions; download the contents of a table to a local directory, or the results of queries to an HDFS directory. Here is an example of HQL queries: SELECT upper(name), salesprice FROM sales; SELECT category, count(1) FROM products GROUP BY category; When you look at the above queries, you can see they are very similar to SQL queries. Tomorrow In tomorrow’s blog post we will discuss a very important component of the Big Data ecosystem – Pig. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
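    To make the Tables/Partitions/Buckets organization a little more concrete, here is a minimal HiveQL sketch. The table and column names are made up for illustration and are not from the original article:

    -- Hypothetical table: partitioned by sale_date, bucketed by category
    CREATE TABLE sales (
        name       STRING,
        salesprice DOUBLE,
        category   STRING
    )
    PARTITIONED BY (sale_date STRING)       -- each partition maps to an HDFS subdirectory
    CLUSTERED BY (category) INTO 8 BUCKETS; -- bucket files live inside each partition

    -- A typical data mining style summary query over that table
    SELECT sale_date, category, SUM(salesprice) AS total_sales
    FROM   sales
    GROUP BY sale_date, category;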

    Read the article

  • Recover Lost Form Data in Firefox

    - by Asian Angel
    Have you ever filled in a text area or form in a webpage and something happens before you can finish it? If you like the idea of recovering that lost data then you will want to have a look at the Lazarus: Form Recovery extension for Firefox. Lazarus: Form Recovery in Action For our first example we chose the comment text box area for one of the articles here at the website. As you can see we were not finished typing in the whole comment yet… Notice the “Lazarus Icon” in the lower right corner. Note: We simulated accidental tab closures for our two examples. After getting our webpage opened up again all of our text was gone. Right clicking within the text area showed two options available…“Recover Text & Recover Form”. Notice that our lost text was listed as a “sub menu”…this could be extremely useful in matching up the appropriate text to the correct webpage if you had multiple tabs open before something happened. Click on the correct text listing to insert it. So easy to finish writing our comment without having to start from zero again. In our second example we chose the sign-up form page for the website. As before we were not finished filling in the form… Getting the webpage opened back up showed the same problem as before…all the entered text was lost. This time we right clicked in the browser window area and there was that wonderful “Recover Form Command” waiting to be used. One click and… All of our lost form data was back and we were able to finish filling in the form. For those who may be interested you can disable Lazarus: Form Recovery on individual websites using the “Context Menu” for the “Status Bar Icon”. Options There are three sections in the options and you should take a quick look through them to make any desired modifications in how Lazarus: Form Recovery functions. The first “Options Area” focuses on display/access for the extension. The second “Options Area” allows you to expand the type of data retained, enable removal of data within a given time frame, set up a password, disable search indexing, and enable form data retention while in “Private Browsing Mode”. The third “Options Area” focuses on the Lazarus database itself. Conclusion If you have ever lost text area or form data before then you know how much time could be lost in starting over. Lazarus: Form Recovery helps provide a nice backup solution to get you up and running once again with a minimum of effort. Links Download the Lazarus: Form Recovery extension (Mozilla Add-ons) Download the Lazarus: Form Recovery extension (Extension Homepage)

    Read the article

  • Bitmask data insertions in SSDT Post-Deployment scripts

    - by jamiet
    On my current project we are using SQL Server Data Tools (SSDT) to manage our database schema, and one of the tasks we need to do often is insert data into that schema once deployed; the typical method employed to do this is to leverage Post-Deployment scripts, and that is exactly what we are doing. Our requirement is a little different though: our data is split up into various buckets that we need to selectively deploy on a case-by-case basis. I was going to use a SQLCMD variable for each bucket (defaulted to some value other than “Yes”) to define whether it should be deployed or not, so we could use something like this in our Post-Deployment script:

    IF ($(DeployBucket1Flag) = 'Yes')
    BEGIN
       :r .\Bucket1.data.sql
    END
    IF ($(DeployBucket2Flag) = 'Yes')
    BEGIN
       :r .\Bucket2.data.sql
    END
    IF ($(DeployBucket3Flag) = 'Yes')
    BEGIN
       :r .\Bucket3.data.sql
    END

    That works fine and is, I’m sure, a very common technique for doing this. It is however slightly ugly because we have to litter our deployment with various SQLCMD variables. My colleague James Rowland-Jones (whom I’m sure many of you know) suggested another technique – bitmasks. I won’t go into detail about how this works (James has already done that at Using a Bitmask - a practical example) but I’ll summarise by saying that you can deploy different combinations of the buckets simply by supplying a different numerical value for a single SQLCMD variable. Each bit of that value’s binary representation signifies whether a particular bucket should be deployed or not. This is better demonstrated using the following simple script (which can be easily leveraged inside your Post-Deployment scripts):

    /*
    $(DeployData) is a SQLCMD variable that would, if you were using this in SSDT, be declared
    in the SQLCMD variables section of your project file. It should contain a numerical value,
    defaulted to 0. In this example I have declared it using a :setvar statement. Test the
    effect of different values by changing the :setvar statement accordingly.
    Examples:
    :setvar DeployData 1   will deploy bucket 1
    :setvar DeployData 2   will deploy bucket 2
    :setvar DeployData 3   will deploy buckets 1 & 2
    :setvar DeployData 6   will deploy buckets 2 & 3
    :setvar DeployData 31  will deploy buckets 1, 2, 3, 4 & 5
    */
    :setvar DeployData 0

    DECLARE @bitmask VARBINARY(MAX) = CONVERT(VARBINARY,$(DeployData));

    IF (@bitmask & 1 = 1)
    BEGIN
        PRINT 'Bucket 1 insertions';
    END
    IF (@bitmask & 2 = 2)
    BEGIN
        PRINT 'Bucket 2 insertions';
    END
    IF (@bitmask & 4 = 4)
    BEGIN
        PRINT 'Bucket 3 insertions';
    END
    IF (@bitmask & 8 = 8)
    BEGIN
        PRINT 'Bucket 4 insertions';
    END
    IF (@bitmask & 16 = 16)
    BEGIN
        PRINT 'Bucket 5 insertions';
    END

    An example of running this using DeployData=6: the binary representation of 6 is 110. The second and third significant bits of that binary number are set to 1 and hence buckets 2 and 3 are “activated”. Hope that makes sense and is useful to some of you! @Jamiet P.S. I used the awesome HTML Copy feature of Visual Studio’s Productivity Power Tools in order to format the T-SQL code above for this blog post.
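    As a small usage aid (my own addition, not part of the original post), the value to pass for a given combination of buckets can be composed with T-SQL’s bitwise OR rather than worked out by hand. For example, buckets 1, 3 and 5 correspond to bit values 1, 4 and 16:

    -- Illustrative only: compose the DeployData value for buckets 1, 3 and 5
    SELECT 1 | 4 | 16 AS DeployDataValue;  -- returns 21, so you would supply DeployData=21 at deployment time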

    Read the article

  • SQL SERVER – Clustered Index and Primary Key – Contest Win Joes 2 Pros Combo (USD 198) – Day 3 of 5

    - by pinaldave
    In August 2011 we ran a contest where every day we gave away one book for an entire month. The contest was a huge success: lots of people participated and lots of books were given away. I have received lots of questions asking if we are doing something similar this month. Absolutely; instead of running a month-long contest, we are doing something more interesting. We are giving away gifts worth USD 198 every day this week. We are giving away the Joes 2 Pros 5-volume (BOOK) SQL 2008 Development Certification Training Kit every day: one copy in India and one in the USA, a total of 2 giveaways (worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here Amazon | Flipkart | Indiaplaza How to Win: Read the question, read the hints, then answer the quiz in the contact form in the following format: Question Answer Name of the country (the contest is open to USA and India residents only). 2 winners will be randomly selected and announced on August 20th. Question of the Day: Which of the following datatypes is usually NOT the best choice for a Primary Key and Clustered Index? a) INT b) BIGINT c) GUID d) SMALLINT Query Hints: BIG HINT POST The clustered index is the placement order of a table’s records in memory pages. When you insert new records, each record will be inserted into the memory page in the order it belongs. In the figure below we see another new record (Major Disarray) being inserted, in sequence, between Jonny and Rick. Since there is no room in this memory page, some records will need to shift around. The page split occurs when Irene’s record moves to the second page. Page splits are considered very bad for performance, and there are a number of techniques to reduce, or even eliminate, the risk of page splits. You can create a clustered index on the table on any field you choose. Sometimes SQL will create a clustered index for you. Often the field holding the Primary Key makes a great candidate for the clustered index. Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 3. SQL Joes 2 Pros Development Series – All about SQL Statistics SQL Joes 2 Pros Development Series – Introduction to Page Split SQL Joes 2 Pros Development Series – The Clustered Index – Simple Understanding SQL Joes 2 Pros Development Series – Geography Data Type – Calculating Distance Between Two Points on the Earth SQL Joes 2 Pros Development Series – Sparse Data and Space Used by Sparse Data SQL Joes 2 Pros Development Series – System and Time Data Types SQL Joes 2 Pros Development Series – Data Row Space Usage and NULL Storage Next Step: Answer the quiz in the contact form in the following format: Question Answer Name of the country (the contest is open for USA and India). Bonus Winner: Leave a comment with your favorite article from the “Additional Hints” section and you may be eligible for a surprise gift. There is no country restriction for this bonus contest. Do mention why you liked that particular blog post and I will announce the winner of the same along with the main contest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
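    To illustrate the page-split hint above with a concrete schema (these table definitions are my own sketch, not part of the contest post): an ever-increasing integer key appends new rows to the end of the clustered index, while a random NEWID() key lands new rows in the middle of existing pages and makes page splits far more likely.

    -- Hypothetical tables, for comparison purposes only
    CREATE TABLE dbo.OrdersIntKey
    (
        OrderID   INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED, -- sequential inserts, rows added at the end
        OrderDate DATETIME NOT NULL
    );

    CREATE TABLE dbo.OrdersGuidKey
    (
        OrderID   UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY CLUSTERED, -- random inserts, rows land mid-page
        OrderDate DATETIME NOT NULL
    );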

    Read the article

  • How to Use Sparklines in Excel 2010

    - by DigitalGeekery
    One of the cool features of Excel 2010 is the addition of Sparklines. A Sparkline is basically a little chart displayed in a cell representing your selected data set that allows you to quickly and easily spot trends at a glance. Inserting Sparklines on your Spreadsheet You will find the Sparklines group located on the Insert tab. Select the cell or cells where you wish to display your Sparklines. Select the type of Sparkline you’d like to add to your spreadsheet. You’ll notice there are three types of Sparklines: Line, Column, and Win/Loss. We’ll select Line for our example. A Create Sparklines dialog box pops up and will prompt you to enter a Data Range you are using to create the Sparklines. You’ll notice that the location range (the range where the Sparklines will appear) is already filled in. You can type in the data range manually, or click and drag with your mouse across to select the data range. This will auto-fill the data range for you. Click OK when you are finished. You will see your Sparklines appear in the desired cells. Customizing Sparklines Select one or more of the Sparklines to reveal the Design tab. You can display certain value points like high and low points, negative points, and first and last points by selecting the corresponding options from the Show group. You can also mark all value points by selecting Markers. Select your desired Sparklines and click one of the included styles from the Style group on the Design tab. Click the down arrow on the lower right corner of the box to display additional pre-defined styles… or select the Sparkline Color or Marker Color options to fully customize your Sparklines. The Axis options provide additional settings such as Date Axis Type, Plotting Data Left to Right, and displaying an axis point to represent the zero line in your data with Show Axis. Column Sparklines Column Sparklines display your data in individual columns as opposed to the Line view we’ve been using for our examples. Win/Loss Sparklines Win/Loss shows a basic positive or negative representation of your data set. You can easily switch between different Sparkline types by simply selecting the current cells (individually or the entire group), and then clicking the desired type on the Design tab. For those that may be more visually oriented, Sparklines can be a wonderful addition to any spreadsheet. Are you just getting started with Office 2010? Check out some of our other great Excel posts such as how to copy worksheets, print only selected areas of a spreadsheet, and how to share data with Excel in Office 2010.

    Read the article

  • Do we still have a case against the goto statement? [closed]

    - by FredOverflow
    Possible Duplicate: Is it ever worthwhile using goto? In a recent article, Andrew Koenig writes: When asked why goto statements are harmful, most programmers will say something like "because they make programs hard to understand." Press harder, and you may well hear something like "I don't really know, but that's what I was taught." For that reason, I'd like to summarize Dijkstra's arguments. He then shows two program fragments, one without a goto and one with a goto:
    if (n < 0) n = 0;
    Assuming that n is a variable of a built-in numeric type, we know that after this code, n is nonnegative. Suppose we rewrite this fragment:
    if (n >= 0) goto nonneg;
    n = 0;
    nonneg: ;
    In theory, this rewrite should have the same effect as the original. However, rewriting has changed something important: It has opened the possibility of transferring control to nonneg from anywhere else in the program. I emphasized the part that I don't agree with. Modern languages like C++ do not allow goto to transfer control arbitrarily. Here are two examples: You cannot jump to a label that is defined in a different function. You cannot jump over a variable initialization. Now consider composing your code of tiny functions that adhere to the single responsibility principle:
    int clamp_to_zero(int n)
    {
        if (n >= 0) goto n_is_not_negative;
        n = 0;
    n_is_not_negative:
        return n;
    }
    The classic argument against the goto statement is that control could have transferred from anywhere inside your program to the label n_is_not_negative, but this simply is not (and was never) true in C++. If you try it, you will get a compiler error, because labels are scoped. The rest of the program doesn't even see the name n_is_not_negative, so it's just not possible to jump there. This is a static guarantee! Now, I'm not saying that this version is better than the one without the goto, but to make the latter as expressive as the first one, we would at least have to insert a comment, or better yet, an assertion:
    int clamp_to_zero(int n)
    {
        if (n < 0) n = 0;
        // n is not negative at this point
        assert(n >= 0);
        return n;
    }
    Note that you basically get the assertion for free in the goto version, because the condition n >= 0 is already written in line 1, and n = 0; satisfies the condition trivially. But that's just a random observation. It seems to me that "don't use gotos!" is one of those dogmas like "don't use multiple returns!" that stem from a time when the real problem was functions of hundreds or even thousands of lines of code. So, do we still have a case against the goto statement, other than that it is not particularly useful? I haven't written a goto in at least a decade, but it's not like I was running away in terror whenever I encountered one. 1 Ideally, I would like to see a strong and valid argument against gotos that still holds when you adhere to established programming principles for clean code like the SRP. "You can jump anywhere" is not (and has never been) a valid argument in C++, and somehow I don't like teaching stuff that is not true. 1: Also, I have never been able to resurrect even a single velociraptor, no matter how many gotos I tried :(

    Read the article

  • PENGUIN IS GETTING READY FOR ORACLE OPENWORLD 2012

    - by Zeynep Koch
    Are you looking for reasons to attend Oracle OpenWorld? How about the Oracle Linux sessions and hands-on labs below?
    1. General Session: Oracle Linux Strategy and Roadmap. In this session, Oracle executives will discuss Linux strategy; the roadmap; contributions to the Linux mainline kernel; and what's in store for upcoming releases of Oracle Linux and the Unbreakable Enterprise Kernel. Don’t miss this session.
    2. New Features in Oracle Linux – A Technical Deep Dive. Collaborating with the Linux community, Oracle engineers contribute to advancing Linux for mission-critical deployments. In this technical session, attendees will learn about the recent developments in Oracle Linux and the Unbreakable Enterprise Kernel.
    3. Why Switch to Oracle Linux? Oracle is the only company that provides a complete Linux solution from applications to disk, fully optimized for Oracle hardware and software, with one-stop support. In this session you will hear from two customers that have successfully implemented Oracle Linux and saved 50 to 90 percent on Linux support costs, as well as the reasons to switch to Oracle Linux.
    4. Debugging and Configuration Best Practices for Oracle Linux. This is one of our best-attended and most informative sessions. In this best practices session, learn how to save time and money while preventing headaches and hassles. Discover expert secrets to get your Linux systems up and running (and keep them running), avoid common pitfalls, prevent problems, and circumvent known issues.
    5. Top Technical Tips for Automatic and Secure Oracle Linux Deployments. In this session, attendees will learn how to easily deploy and install Oracle Linux systems using various technologies like Kickstart, Oracle Enterprise Manager OpsCenter, and Oracle VM Templates for applications on Linux. Additionally, the session will share useful Linux security tips and introduce utilities to help with hardening and securely operating an Oracle Linux system.
    We also have a great session in the Oracle Develop track:
    6. DTrace for Oracle Linux. Initially announced at last year's Oracle OpenWorld, DTrace for Oracle Linux is now available for the Unbreakable Enterprise Kernel R.2. In this session, held by one of the engineers working on the DTrace for Linux port, you will learn how you can use this powerful and flexible framework in your development environment.
    If you prefer practical, hands-on experience, don’t miss our two hands-on labs, where we will cover:
    HOL-1: Oracle Linux Package Management: Configuring and Enabling Services. In this session you will install and configure Oracle VM VirtualBox and import the Oracle Linux virtual appliance. You will then practice package management on Oracle Linux using RPM and yum. You will also be able to review Ksplice, zero-downtime kernel updates that enable you to apply security updates, patches and critical bug fixes without rebooting.
    HOL-2: Oracle Linux Storage Management with LVM and Device Mapper. In this session you will learn about storage management with LVM2 (the Linux Logical Volume Manager) and Btrfs: preparing block devices, creating physical and logical volumes, creating file systems on top of logical volumes, and resizing file systems dynamically. You will also practice setting up software RAID devices and configuring encrypted block devices.
    You will also see Oracle Linux and Ksplice in the three demo pods we will feature on the exhibition demogrounds: one at MySQL Connect and two at Oracle OpenWorld. What more do you need to come to San Francisco?
Oh, I forgot to mention we also have great weather in the fall. Check out the Content Catalog and register to attend the Oracle Linux sessions.

    Read the article

  • Oracle OpenWorld 2012: The Best Just Gets Better

    - by kellsey.ruppel
    For almost 30 years, Oracle OpenWorld has been the world's premier learning event for Oracle customers, developers, and partners. With more than 2,000 sessions providing best practices; demos; tips and tricks; and product insight from Oracle, customers, partners, and industry experts, Oracle OpenWorld provides more educational and networking opportunities than any other event in the world.
    2011 Facts: Attendees from 117 Countries; Used Filtered Tap Water to Eliminate 22 Tons of Plastic Bottles; Diverted Enough Trash to Fill 37 Dump Trucks; 45,000+ Total Registered Attendees.
    Oracle OpenWorld 2012: The Best Just Gets Better. What's New? What's Different? This year Oracle OpenWorld will include the Executive Edge @ OpenWorld (replacing Leaders Circle), the Customer Experience Summit @ OpenWorld, JavaOne, MySQL Connect, and the expanded Oracle PartnerNetwork Exchange @ OpenWorld. More than 50,000 customers and partners will attend OpenWorld to see Oracle's newest hardware and software products at work, and learn more about our server and storage, database, middleware, industry, and applications solutions.
    New This Year: The Executive Edge @ Oracle OpenWorld (Oct 1 - 2). New at Oracle OpenWorld this year, the Executive Edge @ OpenWorld (replacing Leaders Circle) will bring together customer, partner and Oracle executives for two days of keynote presentations, summits targeted to customer industries and organizational roles, roundtable discussions, and great new networking opportunities.
    The Customer Experience Revolution Is Here! Customer Experience Summit @ Oracle OpenWorld (Oct 3 - 5). This dynamic new program offers more than 60 keynotes, roundtables and networking sessions exploring trends, innovations and best practices to help companies succeed with a customer experience-driven business strategy.
    All Things Java -- JavaOne (Sep 30 - Oct 4). JavaOne is the world's most important event for the Java developer community. Technical sessions cover topics that span the breadth of the Java universe, with keynotes from the foremost Java visionaries and expert-led hands-on learning opportunities.
    Are you innovating with Oracle Fusion Middleware? If you are, then you need to know that the Call for Nominations for the 2012 Oracle Fusion Middleware Innovation Awards is open now through July 17, 2012. Jointly sponsored by Oracle, AUSOUG, IOUG, OAUG, ODTUG, QUEST, and UKOUG, the Oracle Fusion Middleware Innovation Awards honor organizations creatively using Oracle Fusion Middleware to deliver unique value to their enterprise. Winning customers and partners will be hosted at Oracle OpenWorld 2012, where they can connect with Oracle executives, network with peers, and be featured in an upcoming edition of Oracle Magazine. Be sure to submit your WebCenter use case today!
    Oracle Music Festival: This year, the first-ever Oracle Music Festival will debut, running from September 30 to October 4. In the tradition of great live music events like Coachella and SXSW, the streets of San Francisco—from 7:00 p.m. to 1:00 a.m. for five nights-into-days—will vibrate with the music of some of today’s hottest name acts, emerging and local bands, and scratching DJs. Outdoor venues and clubs near Moscone Center and the Zone (including 111 Minna, DNA, Mezzanine, Roe, Ruby Skye, Slim’s, the Taylor Street Café, Temple, Union Square, and Yerba Buena Gardens) will showcase acts that range from reggae to rock, punk to ska, R&B to country, indie to honky-tonk.
    After a full day of sessions and networking, you'll be primed for some late-night relaxation and rocking out at one or more of these sets. Please note that with awesome acts, thousands of music devotees, and a limited number of venues each night, access to Festival events is on a first-come, first-served basis. Join us at the Oracle Music Festival--it's going to be epic!
    Save $500 on Registration with Early Bird Pricing: Early Bird pricing ends July 13! Save up to $500 on registration fees by registering by Friday. Will you be attending Oracle OpenWorld 2012? We hope to see you there! Be sure to follow @oraclewebcenter on Twitter for more information and use hashtags #webcenter and #oow!

    Read the article

< Previous Page | 787 788 789 790 791 792 793 794 795 796 797 798  | Next Page >