Search Results


  • Categorize data without consolidating?

    - by sqlnoob
    I have a table with about 1000 records and 2000 columns. What I want to do is categorize each row such that all records with equal column values for all columns except 'ID' are given the same category ID. My final answer would look like:

        ID  A  B  C  ...  CategoryID
         1  1  0  3          1
         2  2  1  3          2
         3  1  0  3          1
         4  2  1  3          2
         5  4  5  6          3
         6  4  5  6          3

    All columns besides ID are equal for IDs 1 and 3, so they get the same category ID, and so on. My thought was to write a SQL query that does a group by on every single column besides 'ID', assign a number to each group, and then join back to my original table. My current input is a text file, and I have SAS, MS Access, and Excel to work with (I could use proc sql from within SAS). Before I go this route and construct the whole query, I was wondering if there is a better way to do this. It will take some work just to write the query, and I'm not even sure it is practical to join on 2000 columns (never tried), so I thought I'd ask for ideas before I got too far down the wrong path.

    EDIT: I just realized my title doesn't really make sense. What I was originally thinking was: "Is there a way I can group by and categorize at the same time, without actually consolidating into groups?"
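    One way to sidestep the 2000-column join entirely (a sketch in SAS, with A, B and C standing in for the real set of non-ID columns, which you would list in full): sort by every non-ID column and let a BY-group data step hand out the category numbers.

        /* Hypothetical sketch: A B C stand in for the ~2000 non-ID columns. */
        proc sort data=mydata out=sorted;
            by A B C;                       /* list every non-ID column here */
        run;

        data categorized;
            set sorted;
            by A B C;                       /* same column list as the sort */
            if first.C then CategoryID + 1; /* first.<last by-var> marks a new combination */
        run;

    Each distinct combination of the BY columns starts a new group, so rows 1 and 3 above land in the same category without any join.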

  • iPhone Application

    - by user553627
    Hello, I'm working on an iPhone project using Xcode, and I actually have not programmed in Objective-C before. My problem is that my app crashes whenever I hit the button that is supposed to show a view of the world map. I think the problem is within the last two lines of the code, but I still can't figure out why, because whenever I comment out the line "[self presentModalViewController:...]" the program doesn't crash. Would appreciate your help!

        -(IBAction) pushedGo:(id)sender {
            CLLocationCoordinate2D coord = {37.331689, -122.030731};
            MapViewController *mapView = [[MapViewController alloc] initWithCoordinates:coord
                                                                               andTitle:@"Apple"
                                                                            andSubTitle:@"111"];
            [self presentModalViewController:mapView animated:YES];
            [mapView release];
        }
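    Since the MapViewController source isn't shown, this is only a hedged guess: a common cause of this kind of crash is a custom initializer that never chains to a designated UIViewController initializer, so the controller has no loadable view when it is presented. A sketch of what a safe initializer might look like (the ivar names here are assumptions):

        // Hypothetical sketch of MapViewController's custom initializer;
        // the real class isn't shown in the question.
        - (id)initWithCoordinates:(CLLocationCoordinate2D)coord
                         andTitle:(NSString *)title
                      andSubTitle:(NSString *)subTitle
        {
            // Chain to a designated initializer so the view can be loaded.
            self = [super initWithNibName:@"MapViewController" bundle:nil];
            if (self) {
                _coord = coord;               // assumed ivars
                _titleText = [title copy];
                _subTitleText = [subTitle copy];
            }
            return self;
        }

    Running the app with NSZombieEnabled set, or reading the console backtrace, should confirm what is actually uninitialized or over-released.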

  • Reading data from USB using C#?

    - by SamGMU
    I do not want to read the serial port, nor other possible easy shortcuts, please. I would like to know how to read a USB port on my laptop using C#. Whether you can suggest a site or explain the flow of the process, I will greatly appreciate your help.
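    One common route is a managed wrapper around libusb such as the open-source LibUsbDotNet library. A minimal sketch follows; the vendor/product IDs and endpoint are placeholders for your actual device, and some devices additionally require claiming an interface first.

        // Minimal sketch using LibUsbDotNet; the VID/PID below are placeholders.
        using System;
        using LibUsbDotNet;
        using LibUsbDotNet.Main;

        class UsbReadDemo
        {
            static void Main()
            {
                UsbDeviceFinder finder = new UsbDeviceFinder(0x1234, 0x5678); // VID, PID
                UsbDevice device = UsbDevice.OpenUsbDevice(finder);
                if (device == null)
                    throw new InvalidOperationException("Device not found.");

                // Read from endpoint 1 with a 5-second timeout.
                UsbEndpointReader reader = device.OpenEndpointReader(ReadEndpointID.Ep01);
                byte[] buffer = new byte[64];
                int bytesRead;
                ErrorCode ec = reader.Read(buffer, 5000, out bytesRead);
                Console.WriteLine("{0}: read {1} bytes", ec, bytesRead);

                device.Close();
                UsbDevice.Exit(); // release libusb resources
            }
        }

    The general flow is: enumerate devices, open the one matching your vendor/product ID, open an endpoint, then read raw transfers from it. The alternative without a library is P/Invoking the Windows SetupDi*/WinUsb APIs directly, which is considerably more work.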

  • Match two data.frames

    - by Elb
    I have a situation like this:

        DF1
        COL1 COL2 COL3
        a    b    c
        b    d    b
        f    e    a
        g    m    f

        DF2
        COL
        a  b  c  d  e  f  g  h  i  l  m  n  o

    I would like to match each column of DF1 with the single column of DF2 and count how many occurrences of DF2 are in each column of DF1. How can this be done? Thanks in advance, E.
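    A minimal sketch in R, assuming the two data frames are named DF1 and DF2 as above:

        # For each column of DF1, count how many of its values occur in DF2$COL.
        sapply(DF1, function(x) sum(x %in% DF2$COL))

        # Or count how often each individual DF2 value appears anywhere in DF1
        # (as.character guards against factor columns):
        vals <- unlist(lapply(DF1, as.character))
        table(factor(vals, levels = as.character(DF2$COL)))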

  • Storing data from database [mysql_num_rows]

    - by user1717305
    So I have this code to pass items from the database to my order table. When I echo the session, the session variables contain something, so there's no problem with that. But when I echo those variables under numrows, it shows nothing. Is there something wrong?

        <?php
        error_reporting(E_ALL ^ E_NOTICE);
        session_start();
        require("connect.php");

        $UserID = $_SESSION['CustNum'];
        $UserN = $_SESSION['UserName'];
        $ProdGTotal = $_SESSION['ProdGTotal'];

        $queryord = mysql_query("SELECT * FROM customer WHERE UserName = '$UserN'");
        $numrows = mysql_num_rows($queryord);

        if(numrows == 1){
            $row = mysql_fetch_assoc($queryord) or die ('Unable to run query:'.mysql_error());
            // fetch associated: get function from a query for a database
            $dbstreet = $row['Street'];
            $dhousenum = $row['HouseNum'];
            $dbcnum = $row['CelNum'];
            $dbarea = $row['Area'];
            $dbbuilding = $row['Building'];
            $dbcity = $row['City'];
            $dbpnum = $row['PhoneNum'];
            $dbfname = $row['FName'];
            $dblname = $row['LName'];
        }
        else die(mysql_error());

        $query4 = mysql_query("INSERT INTO orderdetails VALUES ('', '$UserID', Now(), '$dbhousenum', '$dbstreet', '$dbarea', '$dbbuilding', '$dbcity', '$dbfname', '$dblname', '$dbcnum', '$dbpnum', '$ProdGTotal')", $connect);

        if ($query4){
            header("location:index.php");
        }
        else die(mysql_error());
        ?>
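    A hedged diagnosis, based only on the code shown: in "if(numrows == 1)" the "$" is missing, so PHP compares the bare string "numrows" (the notice for it is hidden by error_reporting(E_ALL ^ E_NOTICE)) against 1. That comparison is never true, the branch never runs, and the $db... variables are never set, which matches the "shows nothing" symptom. There is also a name mismatch: the house number is stored in $dhousenum but the INSERT reads $dbhousenum. A sketch of the corrected lines:

        <?php
        // Sketch of the two fixes; the rest of the script stays as posted.
        if ($numrows == 1) {                 // was: if(numrows == 1)
            $row = mysql_fetch_assoc($queryord) or die('Unable to run query:' . mysql_error());
            $dbhousenum = $row['HouseNum'];  // was $dhousenum; the INSERT uses $dbhousenum
            // ... remaining $db... assignments unchanged ...
        }
        ?>

    As a side note, the mysql_* functions are deprecated; mysqli or PDO with bound parameters would also close the SQL-injection holes in the interpolated queries.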

  • Retrieve data like rework %, schedule and effort variance from Microsoft Project

    - by Ram
    Hi, I need to generate various metrics from my MS Project file for a period of one month. I need to generate the following reports:

    - Schedule Variance
    - Effort Variance
    - Rework Percentage
    - Wasted Effort

    For rework percentage, I am using a condition like: task.Start must be greater than or equal to the period start date and task.Finish less than or equal to the period finish date. But I am concerned about tasks that start before the period start date and end before the period end date. In such cases I only need the rework % for the hours spent between the period start and end, not for the hours spent before the start date. The same applies to tasks that start before the period end date but end after it. Any pointer would be a great help. Thanks
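    One way to handle the partially overlapping tasks (a sketch in MS Project VBA, under the simplifying assumption that a task's actual work is spread evenly across its duration): clamp each task's dates to the reporting window and prorate its work by the overlap fraction.

        ' Hypothetical sketch: prorate each task's actual work by how much of its
        ' duration overlaps the reporting period. Assumes evenly spread work;
        ' Task.TimeScaleData gives exact per-period values if you need them.
        Sub HoursInPeriod(periodStart As Date, periodEnd As Date)
            Dim t As Task, s As Date, f As Date, hrs As Double
            For Each t In ActiveProject.Tasks
                If Not t Is Nothing Then            ' blank task rows come back as Nothing
                    s = t.Start: If s < periodStart Then s = periodStart
                    f = t.Finish: If f > periodEnd Then f = periodEnd
                    If f > s And t.Finish > t.Start Then
                        ' ActualWork is stored in minutes
                        hrs = hrs + (t.ActualWork / 60#) * (f - s) / (t.Finish - t.Start)
                    End If
                End If
            Next t
            MsgBox "Actual hours falling in the period: " & Format(hrs, "0.0")
        End Sub

    The same clamped-overlap idea applies to the rework numerator: only count reworked hours that fall inside the window.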

  • Need help with PHP web app bootstrapping error potentially related to Zend [migrated]

    - by Matt Shepherd
    I am trying to get a program called OpenFISMA running on an Ubuntu AMI in AWS. The app is not really coded on the Ubuntu platform, but I am in my comfort zone there, and I have tried both CentOS and OpenSUSE (both are sort of "native" for the app) with the same or worse results. So, why not just get it working on Ubuntu? Anyway, the app is found here: www.openfisma.org and an install guide is found here: https://openfisma.atlassian.net/wiki/display/030100/Installation+Guide

    The install guide kind of sucks. It doesn't list dependencies in any coherent way or provide much detail (it does not even mention Zend once on the entire page), so I've done a lot of work to divine the information I do have. This page provided some dependency info (though again, Zend is not mentioned once): https://openfisma.atlassian.net/wiki/display/PUBLIC/RPM+Management#RPMManagement-BasicOverviewofRPMPackages

    Anyway, I got all the way through the install (so far as I could reconstruct it). I am going to the login page for the first time, and there should be some sort of bootstrapping occurring when I load the page. (I am not a programmer, so I have no idea what it is doing there.) I get a message on the web page that says: "An exception occurred while bootstrapping the application." So then I look in /var/www/data/logs/php.log and find this message:

        [22-Oct-2013 17:29:18 UTC] PHP Fatal error: Uncaught exception 'Zend_Exception' with message
        'No entry is registered for key 'Zend_Log'' in /var/www/library/Zend/Registry.php:147
        Stack trace:
        #0 /var/www/public/index.php(188): Zend_Registry::get('Zend_Log')
        #1 {main}
          thrown in /var/www/library/Zend/Registry.php on line 147

    This occurs every time I load the page. I gather there is an issue related to registering the Zend_Log variable in the Zend registry, but other than that I really have no idea what to do about it. Am I missing a package that it needs, or is this app not coded to register the variables properly? I have no clue. Any help is greatly appreciated. The application file referenced in the log message (index.php) is included below.

        <?php
        /**
         * Copyright (c) 2008 Endeavor Systems, Inc.
         *
         * This file is part of OpenFISMA.
         *
         * OpenFISMA is free software: you can redistribute it and/or modify it under the terms of the GNU General Public
         * License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later
         * version.
         *
         * OpenFISMA is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
         * warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
         * details.
         *
         * You should have received a copy of the GNU General Public License along with OpenFISMA. If not, see
         * {@link http://www.gnu.org/licenses/}.
         */
        try {
            defined('APPLICATION_PATH') || define(
                'APPLICATION_PATH',
                realpath(dirname(__FILE__) . '/../application')
            );

            // Define application environment
            defined('APPLICATION_ENV') || define(
                'APPLICATION_ENV',
                (getenv('APPLICATION_ENV') ? getenv('APPLICATION_ENV') : 'production')
            );

            set_include_path(
                APPLICATION_PATH . '/../library/Symfony/Components'
                . PATH_SEPARATOR . APPLICATION_PATH . '/../library'
                . PATH_SEPARATOR . get_include_path()
            );

            require_once 'Fisma.php';
            require_once 'Zend/Application.php';

            $application = new Zend_Application(APPLICATION_ENV, APPLICATION_PATH . '/config/application.ini');

            Fisma::setAppConfig($application->getOptions());
            Fisma::initialize(Fisma::RUN_MODE_WEB_APP);

            $application->bootstrap()->run();
        } catch (Zend_Config_Exception $zce) {
            // A zend config exception indicates that the application may not be installed properly
            echo '<h1>The application is not installed correctly</h1>';

            $zceMsg = $zce->getMessage();

            if (stristr($zceMsg, 'parse_ini_file') !== false) {
                if (stristr($zceMsg, 'application.ini') !== false) {
                    if (stristr($zceMsg, 'No such file or directory') !== false) {
                        echo 'The ' . APPLICATION_PATH . '/config/application.ini file is missing.';
                    } elseif (stristr($zceMsg, 'Permission denied') !== false) {
                        echo 'The ' . APPLICATION_PATH . '/config/application.ini file does not have the '
                            . 'appropriate permissions set for the application to read it.';
                    } else {
                        echo 'An ini-parsing error has occured in ' . APPLICATION_PATH . '/config/application.ini '
                            . '<br/>Please check this file and make sure everything is setup correctly.';
                    }
                } else if (stristr($zceMsg, 'database.ini') !== false) {
                    if (stristr($zceMsg, 'No such file or directory') !== false) {
                        echo 'The ' . APPLICATION_PATH . '/config/database.ini file is missing.<br/>';
                        echo 'If you find a database.ini.template file in the config directory, edit this file '
                            . 'appropriately and rename it to database.ini';
                    } elseif (stristr($zceMsg, 'Permission denied') !== false) {
                        echo 'The ' . APPLICATION_PATH . '/config/database.ini file does not have the appropriate '
                            . 'permissions set for the application to read it.';
                    } else {
                        echo 'An ini-parsing error has occured in ' . APPLICATION_PATH . '/config/database.ini '
                            . '<br/>Please check this file and make sure everything is setup correctly.';
                    }
                } else {
                    echo 'An ini-parsing error has occured. <br/>Please check all configuration files and make sure '
                        . 'everything is setup correctly';
                }
            } elseif (stristr($zceMsg, 'syntax error') !== false) {
                if (stristr($zceMsg, 'application.ini') !== false) {
                    echo 'There is a syntax error in ' . APPLICATION_PATH . '/config/application.ini '
                        . '<br/>Please check this file and make sure everything is setup correctly.';
                } elseif (stristr($zceMsg, 'database.ini') !== false) {
                    echo 'There is a syntax error in ' . APPLICATION_PATH . '/config/database.ini '
                        . '<br/>Please check this file and make sure everything is setup correctly.';
                } else {
                    echo 'A syntax error has been reached. <br/>Please check all configuration files and make sure '
                        . 'everything is setup correctly.';
                }
            } else {
                // Then the exception message says nothing about parse_ini_file nor 'syntax error'
                echo 'Please check all configuration files, and ensure all settings are valid.';
            }

            echo '<br/>For more information and help on installing OpenFISMA, please refer to the '
                . '<a target="_blank" href="http://manual.openfisma.org/display/ADMIN/Installation">'
                . 'Installation Guide</a>';
        } catch (Doctrine_Manager_Exception $dme) {
            echo '<h1>An exception occurred while bootstrapping the application.</h1>';

            // Does database.ini have valid settings? Or is it the same content as database.ini.template?
            $databaseIniFail = false;
            $iniData = file(APPLICATION_PATH . '/config/database.ini');
            $iniData = str_replace(chr(10), '', $iniData);

            if (in_array('db.adapter = ##DB_ADAPTER##', $iniData)) {
                $databaseIniFail = true;
            }
            if (in_array('db.host = ##DB_HOST##', $iniData)) {
                $databaseIniFail = true;
            }
            if (in_array('db.port = ##DB_PORT##', $iniData)) {
                $databaseIniFail = true;
            }
            if (in_array('db.username = ##DB_USER##', $iniData)) {
                $databaseIniFail = true;
            }
            if (in_array('db.password = ##DB_PASS##', $iniData)) {
                $databaseIniFail = true;
            }
            if (in_array('db.schema = ##DB_NAME##', $iniData)) {
                $databaseIniFail = true;
            }

            if ($databaseIniFail) {
                echo 'You have not applied the settings in ' . APPLICATION_PATH . '/config/database.ini appropriately. '
                    . 'Please review the contents of this file and try again.';
            } else {
                if (Fisma::debug()) {
                    echo '<p>' . get_class($dme) . '</p><p>' . $dme->getMessage() . '</p><p>'
                        . "<p><pre>Stack Trace:\n" . $dme->getTraceAsString() . '</pre></p>';
                } else {
                    $logString = get_class($dme) . "\n" . $dme->getMessage() . "\nStack Trace:\n"
                        . $dme->getTraceAsString() . "\n";
                    Zend_Registry::get('Zend_Log')->err($logString);
                }
            }
        } catch (Exception $exception) {
            // If a bootstrap exception occurs, that indicates a serious problem, such as a syntax error.
            // We won't be able to do anything except display an error.
            echo '<h1>An exception occurred while bootstrapping the application.</h1>';

            if (Fisma::debug()) {
                echo '<p>' . get_class($exception) . '</p><p>' . $exception->getMessage() . '</p><p>'
                    . "<p><pre>Stack Trace:\n" . $exception->getTraceAsString() . '</pre></p>';
            } else {
                $logString = get_class($exception) . "\n" . $exception->getMessage() . "\nStack Trace:\n"
                    . $exception->getTraceAsString() . "\n";
                Zend_Registry::get('Zend_Log')->err($logString);
            }
        }
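    The fatal error itself is a secondary failure: both of the later catch blocks call Zend_Registry::get('Zend_Log') when Fisma::debug() is false, and if bootstrapping died before the application registered Zend_Log, that lookup throws the uncaught Zend_Exception you see, masking the original exception. A sketch of a local debugging patch (not the project's own code) for those catch blocks:

        <?php
        // Hypothetical debugging patch: fall back to printing the log string
        // when no Zend_Log has been registered yet, so the root cause shows up.
        if (Zend_Registry::isRegistered('Zend_Log')) {
            Zend_Registry::get('Zend_Log')->err($logString);
        } else {
            echo '<pre>' . htmlspecialchars($logString) . '</pre>';
        }
        ?>

    With that in place, the page should display the real bootstrap exception (often a missing PHP extension or an unreadable config file) instead of the registry error.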

  • What is a good way to store tilemap data?

    - by Stephen Tierney
    I'm developing a 2D platformer with some uni friends. We've based it upon the XNA Platformer Starter Kit, which uses .txt files to store the tile map. While this is simple, it does not give us enough control and flexibility with level design. Some examples: multiple layers of content require multiple files, each object is fixed onto the grid, rotation of objects isn't possible, the number of characters is limited, etc.

    So I'm doing some research into how to store the level data and map file. This concerns only the file-system storage of the tile maps, not the data structure to be used by the game while it is running. The tile map is loaded into a 2D array, so this question is about which source to fill the array from.

    Reasoning for a DB: From my perspective I see less redundancy of data using a database to store the tile data. Tiles in the same x,y position with the same characteristics can be reused from level to level. It seems like it would be simple enough to write a method to retrieve all the tiles that are used in a particular level from the database.

    Reasoning for JSON/XML: Visually editable files; changes can be tracked via SVN a lot more easily. But there is repeated content.

    Do either have any drawbacks (load times, access times, memory, etc.) compared to the other? And what is commonly used in the industry? Currently the file looks like this:

        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        .........GGG........
        .........###........
        ....................
        ....GGG.......GGG...
        ....###.......###...
        ....................
        .1................X.
        ####################

    1 - player start point, X - level exit, . - empty space, # - platform, G - gem
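    For comparison, a JSON version of the same level might look something like this (the field names are illustrative, not a standard; the Tiled editor's JSON export uses a similar layers-plus-objects shape):

        {
          "width": 20,
          "height": 15,
          "layers": [
            { "name": "platforms", "rows": ["....................", "... # rows as above ..."] },
            { "name": "gems",      "rows": ["....................", "... G rows as above ..."] }
          ],
          "objects": [
            { "type": "playerStart", "x": 1,  "y": 13 },
            { "type": "exit",        "x": 18, "y": 13 },
            { "type": "gem",         "x": 9,  "y": 7, "rotation": 45 }
          ]
        }

    Layers address the multiple-files problem, and free-floating objects with their own coordinates and rotation address the fixed-to-grid problem, while the file stays diffable under SVN.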

  • WebLogic Application Server: free for developers! by Bruno Borges

    - by JuergenKress
    Great news! Oracle WebLogic Server is now free for developers! What does this mean for you? That you as a developer are permitted to "[...] deploy the programs only on your single developer desktop computer (of any type, including physical, virtual or remote virtual), to be used and accessed by only (1) named developer."

    But the most interesting part of the license change is this one: "You may continue to develop, test, prototype and demonstrate your application with the programs under this license after you have deployed the application for any internal data processing, commercial or production purposes" (read the full license agreement here).

    If you want to take advantage of this licensing change and start developing Java EE applications with the #1 application server in the world, read the previous post, How To Install WebLogic Zip on Linux!

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

  • Is there ever a reason to use C++ in a Mac-only application?

    - by Emil Eriksson
    Is there ever a reason to use C++ in a Mac-only application? I'm not talking about integrating external libraries which are C++; what I mean is using C++ because of advantages in a particular application. While the UI code must be written in Objective-C, what about logic code?

    Because of the dynamic nature of Objective-C, C++ method calls tend to be ever so slightly faster, but does this have any effect in any imaginable real-life scenario? For example, would it make sense to use C++ over Objective-C for simulating large particle systems, where some methods would need to be called over and over in a short time?

    I can also see some cases where C++ has a more appropriate "feel". For example, when doing graphics, it's nice to have vector and matrix types with appropriate operator overloads and methods. This, to me, seems like it would be a bit clunkier to implement in Objective-C. Also, Objective-C objects can never be treated as plain old data structures in the same manner as C++ types, since Objective-C objects always have an isa pointer.

    Does anyone have a real-life example of a situation where C++ was chosen for some parts of an application? Does Apple use any C++ except for the kernel? (I don't want to start a flame war here; both languages have their merits, and I use both equally, though in different applications.)
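    As an illustration of the vector-math case (plain C++, not any Apple API): a small value type with operator overloads can be shared between pure C++ logic code and Objective-C++ (.mm) files, where Objective-C methods can take and return it by value.

        // Illustrative sketch: a value type with operator overloads, usable
        // directly from Objective-C++ (.mm) source files.
        struct Vec3 {
            float x, y, z;
            Vec3(float x_, float y_, float z_) : x(x_), y(y_), z(z_) {}
            Vec3 operator+(const Vec3& o) const { return Vec3(x + o.x, y + o.y, z + o.z); }
            Vec3 operator*(float s)       const { return Vec3(x * s, y * s, z * s); }
            float dot(const Vec3& o)      const { return x * o.x + y * o.y + z * o.z; }
        };

        // In a .mm file an Objective-C method can use it directly, e.g.:
        // - (Vec3)advance:(Vec3)p velocity:(Vec3)v dt:(float)dt { return p + v * dt; }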

  • BizTalk and SQL: Alternatives to the SQL receive adapter. Using Msmq to receive SQL data

    - by Leonid Ganeline
    If we have to get data from a SQL database, the standard way is to use a receive port with the SQL adapter. The SQL receive adapter is a solicit-response adapter: it periodically polls the SQL database with queries. That is the only way it can work, and sometimes it is undesirable. With the new WCF-SQL adapter we can use a lighter-weight approach, but still on the same principle: the WCF-SQL adapter periodically solicits the database with queries to check for new records.

    Imagine a situation where new records can appear at very different rates, some at second intervals, others at intervals of several minutes, and our requirement is to process the new records ASAP. The polling interval then has to be near the shortest interval between new records, one second. As a result, most of the poll queries would return nothing and would load the database without good reason. If the database is working under a heavy load, this is very undesirable.

    Do we have other choices? Sure. We can change the polling to "eventing". The good news is that SQL Server can raise an event when new records appear, using triggers: got a new record, the trigger event fires; no new records, no trigger events and no excessive load on the database. The bad news is that SQL Server doesn't have intrinsic methods to send the event data outside. For example, we would rather use the adapters that listen for data and do not solicit. There are several such adapter-listeners: File, FTP, SOAP, WCF, and MSMQ. But SQL Server doesn't have methods to create and save files, consume web services, or create and send messages to a queue, does it?

    Can we use the File, FTP, MSMQ, or WCF adapters to get data from SQL code? Yes, we can. SQL Server 2005 and 2008 can run .NET code inside SQL code (see SQL CLR Integration). Here is how it works for MSMQ, for example:

    1. A new record is created; the trigger fires.
    2. The trigger calls the CLR stored procedure and passes the message parameters to it.
    3. The CLR stored procedure creates a message and sends it to the outgoing queue on the SQL Server computer.
    4. The MSMQ service transfers the message to the queue on the BizTalk Server computer.
    5. The WCF-NetMsmq adapter receives the message from this queue.

    For the File adapter the idea is the same: the CLR stored procedure creates and stores a file with the message, and then the File adapter picks up this file.

    Using the WCF-NetMsmq adapter to get data from SQL

    Below is the full set of development and deployment steps for the WCF-NetMsmq case.

    Development:
    1. Create the .NET code: a project, class and method to create and send the message to the MSMQ queue.
    2. Create the SQL code in triggers to call the .NET code.

    Installation and deployment:
    1. SQL Server:
       a. Register the CLR assembly with the .NET (CLR) code.
       b. Install the MSMQ services.
    2. BizTalk Server:
       a. Install the MSMQ services.
       b. Create the MSMQ queue.
       c. Create the WCF-NetMsmq receive port.

    The detailed description is below.

    Code

    The .NET code:

        using System.Xml;
        using System.Xml.Linq;
        using System.Xml.Serialization;

        // namespace MyCompany.MySolution.MyProject -- doesn't work; the assembly name is
        // MyCompany.MySolution.MyProject. I gave up on the compound namespace. It seems
        // the CLR Integration cannot work with it. Maybe I'm wrong.

        public class Event
        {
            // Builds the message body in the same shape as a WCF-SQL TypedPolling message.
            static public XElement CreateMsg(int par1, int par2, int par3)
            {
                XNamespace ns = "http://schemas.microsoft.com/Sql/2008/05/TypedPolling/my_storedProc";
                XElement xdoc =
                    new XElement(ns + "TypedPolling",
                        new XElement(ns + "TypedPollingResultSet0",
                            new XElement(ns + "TypedPollingResultSet0",
                                new XElement(ns + "par1", par1),
                                new XElement(ns + "par2", par2),
                                new XElement(ns + "par3", par3))));
                return xdoc;
            }
        }

        ////////////////////////////////////////////////////////////////////////

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.Transactions;
        using System.Data;
        using System.Data.Sql;
        using System.Data.SqlTypes;

        public class MsmqHelper
        {
            [Microsoft.SqlServer.Server.SqlProcedure]
            // msmqAddress as "net.msmq://localhost/private/myapp.myqueue"
            public static void SendMsg(string msmqAddress, string action, int par1, int par2, int par3)
            {
                using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Suppress))
                {
                    NetMsmqBinding binding = new NetMsmqBinding(NetMsmqSecurityMode.None);
                    binding.ExactlyOnce = true;
                    EndpointAddress address = new EndpointAddress(msmqAddress);

                    using (ChannelFactory<IOutputChannel> factory =
                        new ChannelFactory<IOutputChannel>(binding, address))
                    {
                        IOutputChannel channel = factory.CreateChannel();
                        try
                        {
                            XElement xe = Event.CreateMsg(par1, par2, par3);
                            XmlReader xr = xe.CreateReader();
                            Message msg = Message.CreateMessage(MessageVersion.Default, action, xr);
                            channel.Send(msg);
                            // SqlContext.Pipe.Send(...); // to test
                        }
                        catch (Exception ex)
                        {
                            // error handling elided in the original post
                        }
                    }
                    scope.Complete();
                }
            }
        }

    The SQL code in the triggers:

        -- sp_SendMsg was registered as the name of MsmqHelper.SendMsg()
        EXEC sp_SendMsg 'net.msmq://biztalk_server_name/private/myapp.myqueue', 'Create', @par1, @par2, @par3

    Installation and Deployment

    On the SQL Server

    Registering the CLR assembly:

    1. Prerequisites: the .NET 3.5 SP1 Framework. This could be an issue for a production SQL Server!
    2. For more information, please see http://nielsb.wordpress.com/sqlclrwcf/
    3. Copy files:

        >copy "\Windows\Microsoft.net\Framework\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll" "\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\Microsoft.Transactions.Bridge.dll"

       If your machine is 64-bit, run two commands:

        >copy "\Windows\Microsoft.net\Framework\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll" "\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\Microsoft.Transactions.Bridge.dll"
        >copy "\Windows\Microsoft.net\Framework64\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll" "\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\Microsoft.Transactions.Bridge.dll"

    4. Execute the SQL code to register the .NET assemblies:

        -- For x64 OS:
        CREATE ASSEMBLY SMdiagnostics AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMdiagnostics.dll' WITH permission_set = unsafe
        CREATE ASSEMBLY [System.Web] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework64\v2.0.50727\System.Web.dll' WITH permission_set = unsafe
        CREATE ASSEMBLY [System.Messaging] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll' WITH permission_set = unsafe
        CREATE ASSEMBLY [System.ServiceModel] AUTHORIZATION dbo FROM 'C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll' WITH permission_set = unsafe
        CREATE ASSEMBLY [System.Xml.Linq] AUTHORIZATION dbo FROM 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll' WITH permission_set = unsafe

        -- For x32 OS:
        --CREATE ASSEMBLY SMdiagnostics AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMdiagnostics.dll' WITH permission_set = unsafe
        --CREATE ASSEMBLY [System.Web] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Web.dll' WITH permission_set = unsafe
        --CREATE ASSEMBLY [System.Messaging] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll' WITH permission_set = unsafe
        --CREATE ASSEMBLY [System.ServiceModel] AUTHORIZATION dbo FROM 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll' WITH permission_set = unsafe

    5. Register the assembly with the external stored procedure:

        CREATE ASSEMBLY [HelperClass] AUTHORIZATION dbo FROM '<FilePath>MyCompany.MySolution.MyProject.dll' WITH permission_set = unsafe

       where <FilePath> is the path of the file on this machine!

    6. Create the external stored procedure:

        CREATE PROCEDURE sp_SendMsg (
            @msmqAddress nvarchar(100),
            @Action NVARCHAR(50),
            @par1 int,
            @par2 int,
            @par3 int
        )
        AS EXTERNAL NAME HelperClass.MsmqHelper.SendMsg

    Installing the MSMQ services:

    1. Check whether the MSMQ service is NOT already installed. To check: Start / Administrative Tools / Computer Management; on the left pane open "Services and Applications" and look for "Message Queuing". If you cannot see it, follow the next steps.
    2. Start / Control Panel / Programs and Features.
    3. Click "Turn Windows Features on or off".
    4. Click Features, then click "Add Features".
    5. Scroll down the feature list; open "Message Queuing" / "Message Queuing Services"; check the "Message Queuing Server" option.
    6. Click Next; click Install; wait for the installation to finish successfully.

    Creating the MSMQ queue: we don't need to create the queue on the "sender" side.

    On the BizTalk Server

    Installing the MSMQ services: the same as for the SQL Server.

    Creating the MSMQ queue:

    1. Start / Administrative Tools / Computer Management; on the left pane open "Services and Applications", open "Message Queuing", and open "Private Queues".
    2. Right-click "Private Queues"; choose New; choose "Private Queue".
    3. Type the queue name as 'myapp.myqueue' and check the "Transactional" option.

    Creating the WCF-NetMsmq receive port: I will not go through this step in all details; it is straightforward. The URI for this receive location should be 'net.msmq://localhost/private/myapp.myqueue'.

    Notes:

    - The biggest problem is usually the "Registering the CLR assembly" step. It is hard to predict where the assemblies on the assembly list live and which version should be used, x86 or x64. It is a pity the SQL-to-.NET integration is this "rude".
    - In a couple of cases the new WCF-NetMsmq port was not able to work with the queue. Try replacing the WCF-NetMsmq port with a WCF-Custom port using netMsmqBinding; that worked fine for me.
    - To test how messages go through the queue, you can turn on the Journal/Enabled option for the queue. I used the QueueExplorer utility to look at the messages in the Journal. Computer Management can also show the messages, but it shows only a small part of the message body, and in a weird format. QueueExplorer does a better job: it shows the whole body, and XML messages get good color formatting.

  • Breadcrumbs in a modern web application, make sense? [on hold]

    - by Xtreme Biker
    I'm currently beginning development of a new web application. The whole web application is going to be bookmarkable, and all the pages accessible via GET requests and URL parameters. Having said that, let's suppose I've got three entities in my application: Customer, Team and City. Each Customer and Team belongs to a City, and I've got a city-detail page which displays the detail for a concrete city. So the following navigation cases are possible:

        Customers - Customer detail (id=2) - City detail (id=3)
        Football teams - Team detail (id=5) - City detail (id=3)
        Cities - City detail (id=3)

    There are three possible ways of ending up in a city detail view. My question is: does it make sense to implement a breadcrumb to show such a history, having it available in the browser itself? Would it be more appropriate to show a breadcrumb for the last case, no matter where we're coming from (a hierarchical breadcrumb)? That's what Jakob Nielsen points out here:

    "Offering users a Hansel-and-Gretel-style history trail is basically useless, because it simply duplicates functionality offered by the Back button, which is the Web's second-most-used feature. A history trail can also be confusing: users often wander in circles or go to the wrong site sections. Having each point in a confused progression at the top of the current page doesn't offer much help. Finally, a history trail is useless for users who arrive directly at a page deep within the site."

    Also, even if the history trail seems the most natural way to implement it, it requires extra effort to keep the whole track, HTTP being a stateless medium.

  • When running a .jar application with OpenJDK, my keyboard becomes unresponsive?

    - by Mochan
    I recently downloaded a Java application in the .jar format and had it running on my computer not long ago. Now that I'm temporarily using my desktop instead of my laptop, I want it to run there. On my laptop it was a tremendous hassle to get OpenJDK to even run the application without the window going black; on my desktop I don't have that problem. However, when I run the application, my keyboard becomes unresponsive and doesn't type at all. This is a really big problem because the application demands the use of a keyboard. It works perfectly on my laptop, but on the desktop it's completely useless. I don't know if there's a keyboard driver I'm missing, but there shouldn't be, because the keyboard runs flawlessly everywhere else. I'm using OpenJDK 6 because 7 has the same 'black screen' problem I mentioned, so I need this to work with OpenJDK 6. Thanks so much in advance, and I'll try to specify as many details as I can. M
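    A hedged suggestion rather than a confirmed fix: this symptom matches a known clash between Java applications and the ibus/SCIM input method rather than a keyboard driver problem, and launching the jar with the X input method disabled for that one process has been known to restore typing:

        # Hypothetical workaround: disable the X input method for this process only.
        XMODIFIERS=@im=none java -jar application.jar

    If typing comes back, the keyboard itself was never the issue, only the input-method hookup inside the JVM.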

  • How does one find out which application is associated with an indicator icon in Ubuntu 12.04?

    - by Amos Annoy
    It is trivial to do this in Ubuntu 10.04; the question is specific to Ubuntu 12.04, and this has nothing to do (does it?) with right-click. How can an indicator's icon in Ubuntu 12.04 be matched with the program responsible for its manifestation on the top panel? A list of running applications, including all processes, is available in System Monitor. How is the correct matching process found for an indicator?

    (Examining System Monitor points out a rather poignant factor in the faster depletion and shortened run time on battery: the ambient quiescent CPU rate in 12.04 is now well over 20%, when previously, in 10.04, it was well under 10%, between 5% and 7%!)

    (I have a problem with the battery indicator: it sometimes shows % and other times hh:mm, and it is necessary to know the application and version to get more info on controlling it. Ditto: there are issues with other indicator applications.)

    Details from "How can I find Application Indicator ID's?" suggest looking at file:///usr/share/indicator-application/ordering-override.keyfile:

        [Ordering Index Overrides]
        nm-applet=1
        gnome-power-manager=2
        ibus=3
        gst-keyboard-xkb=4
        gsd-keyboard-xkb=5

    This solves the battery identification, and presumably nm is NetworkManager for the RF icon, but the envelope, Bluetooth and speaker indicator applications are still a mystery. (Also, the ordering is not correlated.) Mind you, it was simple in the past to just right-click and use the About option to find the application and version.

    Browsing around and about file:///usr/share/indicator-application/ordering-override.keyfile, I examined file:///usr/share/indicators and file:///usr/share/indicators/messages/applications/ ... perhaps/presumably the information sought may be buried in file:///usr/share/indicators.

  • How to decide how backward-compatible my new Mac OS X application should be?

    - by haimg
    I'm currently contemplating writing an OS X version of my Windows software. My Windows application still supports Windows XP, and I know that if I drop support for it now, our customers will cry bloody murder. I'm new to OS X development, and as I learn the technology, APIs, etc., I have realized that if I'm going to provide a comparable level of backward compatibility (e.g. down to OS X 10.5), I will not be able to use many things that look very useful and relevant in my case (ARC, XPC communications, many others). This is quite different from Windows, in my opinion, where very little has changed between Windows XP and Windows 7 from a desktop application developer's standpoint.

    So, on one hand, it seems like a complete waste to stick to a 2007- or 2009-level API in 2012. On the other hand, according to a NetMarketShare report and a Stat Owl report, Mac OS X 10.5 and 10.6 market share is still 11% and 35-40% respectively. However, I'm not sure whether these older-OS users are my target audience (buyers of software utilities), given they didn't bother to upgrade their OS...

    My question: are there any other factors I should take into account when deciding whether to target 10.5, 10.6 or 10.7 for a new application?

  • How to package static content outside of web application?

    - by chinto
    Our web application has static content packaged as part of the WAR. We have been planning to move it out of the project and host it directly on Apache to achieve the following objectives:

    - Reduce bloat: the static content is getting too big, inflating the EAR size and slowing deployment across nodes.
    - Faster deployment times.
    - Take the load off the application server.
    - Host the static content on a subdomain, allowing some browsers (IE) to load resources simultaneously.
    - Give us the option of further caching, such as Apache mod_cache, apart from the cache headers we send out to browsers.

    We use yuicompressor-maven-plugin to aggregate and minimize JS files. My question is: how do I package and manage this static content outside of the web application? My current options are:

    1. A new Maven WAR project, still using the same plugin for aggregation and compression.
    2. Just a plain directory in SVN, using the YUI/Google compressor directly.

    Or is there a better technology out there to manage static content as a project?
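    A sketch of option 1 (all names and version numbers here are illustrative placeholders): a separate WAR module that carries only the static resources and still runs yuicompressor-maven-plugin for aggregation and minification.

        <!-- Hypothetical static-content module POM fragment. -->
        <project>
            <modelVersion>4.0.0</modelVersion>
            <groupId>com.example</groupId>
            <artifactId>static-content</artifactId>
            <version>1.0.0</version>
            <packaging>war</packaging>
            <build>
                <plugins>
                    <plugin>
                        <groupId>net.alchim31.maven</groupId>
                        <artifactId>yuicompressor-maven-plugin</artifactId>
                        <version>1.3.0</version>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>compress</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <nosuffix>true</nosuffix> <!-- keep original file names -->
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </project>

    The resulting WAR (or just its exploded contents) can then be deployed to Apache on its own subdomain, independently of the application EAR.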

  • Welcome!

    - by mannamal
    Welcome to the Oracle Big Data Connectors blog, which will focus on posts related to integrating data on a Hadoop cluster with Oracle Database. In particular, the blog will focus on best practices, usage notes, and performance tips for using Oracle Loader for Hadoop and Oracle Direct Connector for HDFS, which are part of Oracle Big Data Connectors. Oracle Big Data Connectors 1.0 also includes Oracle R Connector for Hadoop and Oracle Data Integrator Application Adapters for Hadoop.

    Oracle Loader for Hadoop: Oracle Loader for Hadoop loads data from Hadoop to Oracle Database. It runs as a MapReduce job on Hadoop to partition, sort, and convert the data into an Oracle-ready format, offloading to Hadoop the processing that is typically done using database CPUs. The data is then loaded into the database by the Oracle Loader for Hadoop job (online load) or written out as Oracle Data Pump files for load and access later (offline load) with Oracle Direct Connector for HDFS.

    Oracle Direct Connector for HDFS: Oracle Direct Connector for HDFS is a connector for high-speed access to data on HDFS from Oracle Database. With this connector, Oracle SQL can be used to directly query data on HDFS. The data can be Oracle Data Pump files generated by Oracle Loader for Hadoop or delimited text files. The connector can also be used to load data into the database using SQL.

  • Is it reasonable to require passwords when users sign into my application through social media accounts?

    - by BrMcMullin
    I've built an application that requires users to authenticate with one or more social media accounts from Facebook, Twitter, or LinkedIn.

    Edit: Once the user has signed in, an 'identity' for them is maintained in the system, to which all content they create is associated. A user can associate one account from each of the supported providers with this identity. I'm concerned about how to protect potential users from connecting the wrong account to their identity in our application.

    There are two main scenarios that could happen:

    1. A user has multiple accounts on one of the three providers and is not logged into the one s/he desires.
    2. A user comes to a public or shared computer on which the previous user left themselves logged into one of the three providers.

    While I haven't encountered many examples of this myself, I'm considering requiring users to password-authenticate with Facebook, Twitter, and LinkedIn whenever they sign into our application. Is that a reasonable approach, or are there reasons why many other sites and applications don't challenge users to provide a user name and password when authorizing applications to access their social media accounts? Thanks in advance!

    Edit: A clarification: I'm not intending to store anyone's user name and password. Rather, when a user clicks the button to sign in, with Facebook as an example, I'm considering showing an "Is this you?" type window. The idea is that a user would respond to the challenge by either signing into Facebook on the account fetched from the OAuth hash, or signing into the correct account, after which the OAuth callback would run with the new OAuth hash data.

  • Google Analytics API data for goals (funnels) doesn't match - how do they reconcile?

    - by bkgraham
    I have a Google Analytics account with a well-functioning funnel made up of 4 goals. I can query the API and get the data out, but it does not match the funnel report in Analytics. Without getting into specific values, I can give you an example with faked data. Here's how the funnel might look:

        Shopping Cart   100 > 100 > 20    80 (80%)
        Address Page      5 >  85 > 25    60 (71%)
        Payment Page      2 >  62 > 10    52 (84%)
        Checkout          1 >  53         (49.07% funnel conversion rate)

    Okay, so you would expect the API to output data something like this:

        goal1Starts  goal1Completions  goal1Abandons
        100          80                20
        goal2Starts  goal2Completions  goal2Abandons
        85           60                25
        goal3Starts  goal3Completions  goal3Abandons
        62           52                10
        goal4Starts  goal4Completions  goal4Abandons
        53           53                0

    Instead, it's different. Firstly, the abandons are associated with the following goal (so goal1 always has 0 abandons and goal4 always has 0 abandons). Okay, I can work with that. What's confusing is that the numbers are always a little different. The goal1Completions always match the report, as do the goal4Completions, but everything else is off by a small amount. Sometimes it's only 2 visits; other times it's off by 50. For the report above, here's the kind of results I would tend to get:

        goal1Starts  goal1Completions  goal1Abandons
        100          100               0
        goal2Starts  goal2Completions  goal2Abandons
        105          84                21
        goal3Starts  goal3Completions  goal3Abandons
        90           65                25
        goal4Starts  goal4Completions  goal4Abandons
        58           53                5

    Here's what I know:

    1. Goal(n)Completions + Goal(n)Abandons = Goal(n)Starts
    2. Goal(n)Starts = Goal(n-1)Completions
    3. Goal(n)Starts - Goal(n-1)Completions != reported number entering at that level

    That third one is particularly disappointing. So, here's my question: what data do I need to pull from the API in order to recreate the counts in the funnel report in Google Analytics? I don't need the pages exited to or entered from - just the counts at every level.

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message-passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data.

    Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema. If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes, rendering them far less efficient than they need to be to achieve organizational goals. Thus, the expected return on the investment in these applications is never realized.

    The success of all business processes depends on the availability of accurate master data. Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business; then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve, because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments.

    Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation.

    Oracle has deep knowledge of applications dating back to the early 1990s. It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove the requirements for GSS.

    (Figure: GSS Requirements Gathering - Oracle's unique position that enabled the creation of the Global Single Schema.)

    GSS defines all the key business entities and attributes, including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory, to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.

    Business Process Automation

    E-business is about maximizing operational efficiency. At the highest level, these 'operations' span all that you do as an organization.

    (Figure: Enterprise Business Processes - some of these high-level business processes.)

    Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and financial processes associated with all this procuring, manufacturing, and selling activity.

    Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place:

    - The same accurate and timely customer data will be provided to all your operational applications, from the call center to the point of sale.
    - The same accurate and timely supplier data will be provided to all your operational applications, from supply chain planning to procurement.
    - The same accurate and timely product information will be available to all your operational applications, from demand chain planning to marketing.

    You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise.

    Global Single Schema

    Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations and extensibility. The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a service (DaaS) to business processes and enable operations in heterogeneous IT environments.

    - Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility. This enables model extensions that reflect business entities unique to your organization's operations.
    - The schema is multi-organization enabled, so data manipulation can be controlled along organizational boundaries.
    - It uses variable-byte Unicode to support over 31 languages.
    - The schema encodes flexible date and flexible address formats for easy localization.

    No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. It identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model; their presence expresses fundamental business rules for the interaction between business entities.

    (Figure: Interconnected Business Entities - some of these connections.)

    Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate, disconnected models for various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

  • To SYNC or not to SYNC – Part 3

    - by AshishRay
    I can't believe it has been almost a year since my last blog post. I know, that's an absolute no-no in the blogosphere. And I know that "I have been busy" is not a good excuse. So, without trying to come up with an excuse, let me state this: my apologies for taking such a long time to write the next part. Without further ado, here goes.

    This is Part 3 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In this article, I will talk in detail about why Data Guard SYNC is a good thing. I will also talk about distance implications for setting up such a configuration.

    So, Why Good?

    Why is Data Guard SYNC a good thing? Because, at the end of the day, this gives you the assurance of zero data loss - it doesn't matter what outage may befall your primary system. Befall! Boy, that sounds theatrical. But seriously, think about this: it minimizes your data risks. That's a big deal. Whether you have an outage due to bad disks, faulty hardware components, hardware/software bugs, physical data corruption, power failures, lightning that takes out a significant part of your data center, fire that melts your assets, water leakage from the cooling system, or human errors such as accidental deletion of online redo log files - it doesn't matter. You can have that "Om - peace" look on your face and then fail over to the standby system, without losing a single bit of data in your Oracle database. You will be a hero, as shown in this not-so-imaginary conversation:

        IT Manager: Well, what's the status?
        You: John is doing the trace analysis on the storage array.
        IT Manager: So? How long is that gonna take?
        You: Well, he is stuck, waiting for a response from <insert your not-so-favorite storage vendor here>.
        IT Manager: So, no root cause yet?
        You: I told you, he is stuck. We have escalated with their Support, but you know how long these things take.
        IT Manager: Darn it - the site is down!
        You: Not really ...
        IT Manager: What do you mean?
        You: John is stuck, but Sreeni has already done a failover to the Data Guard standby.
        IT Manager: Whoa, whoa - wait! Failover means we lost some data. Why did you do this without letting the Business group know?
        You: We didn't lose any data. Remember, we had set up Data Guard with SYNC? So now, any problems on production - we just fail over. No data loss, and we are up and running in minutes. The Business guys don't need to know.
        IT Manager: Wow! Are we great or what!!
        You: I guess ...

    Ok, so you get it - SYNC is good. But as my dear friend Larry Carpenter says, "TANSTAAFL", or "There ain't no such thing as a free lunch". Yes, of course: investing in Data Guard SYNC means that you have to invest in a low-latency network, you have to monitor your applications and database especially in peak load conditions, and you cannot under-provision your standby systems. But all these are good and necessary things if you are supporting mission-critical apps that are supposed to be running 24x7. The peace of mind that this investment will give you is priceless, especially if you are serious about HA.

    How Far Can We Go?

    Someone may say at this point: well, I can't use Data Guard SYNC over my coast-to-coast deployment. Most likely, true. So how far can you go? Well, we have customers who have deployed Data Guard SYNC over 300+ miles! Does this mean that you can also deploy over similar distances? Duh - no! I am going to say something here that most IT managers don't like to hear: "It depends!" It depends on your application design, application response time / throughput requirements, network topology, etc. However, because of the optimal way we do SYNC, customers have been able to stretch Data Guard SYNC deployments over longer distances compared to traditional, storage-centric ways of doing this.

    The MAA Database 10.2 best practices paper Data Guard Redo Transport & Network Configuration, and the Oracle Database 11.2 High Availability Best Practices Manual, talk about some of these SYNC-related metrics. For example, a test deployment of Data Guard SYNC over 330 miles with 10ms latency showed an impact of less than 5% for a busy OLTP application.

    Even if you can't deploy Data Guard SYNC over your WAN distance, or if you already have an ASYNC standby located thousands of miles away, here's another nifty way to boost your HA: have a local standby, configured SYNC. How local is "local"? Again - it depends. One customer runs a local SYNC standby across the campus. Another customer runs it across 15 miles in another data center. Both of these customers are running Data Guard SYNC as their HA standard. If a localized outage affects their primary system, no problem! They have all the data available on the standby, to which they can fail over. Very fast. In seconds.

    Wait - did I say "seconds"? Yes, Virginia, there is a Santa Claus. But you have to wait till the next blog article to find out more. I assure you tho' that this time you won't have to wait for another year.
