Search Results

Search found 42428 results on 1698 pages for 'database query'.

  • MySQL Cluster 7.3 Labs Release – Foreign Keys Are In!

    - by Mat Keep
    Summary (aka TL;DR): Support for Foreign Key constraints has been one of the most requested feature enhancements for MySQL Cluster. We are therefore extremely excited to announce that Foreign Keys are part of the first Labs Release of MySQL Cluster 7.3 – available for download, evaluation and feedback now! (Select the mysql-cluster-7.3-labs-June-2012 build.) In this blog, I will attempt to discuss the design rationale, implementation, configuration and steps to get started in evaluating the first MySQL Cluster 7.3 Labs Release.

    Pace of Innovation. It was only a couple of months ago that we announced the General Availability (GA) of MySQL Cluster 7.2, delivering 1 billion Queries per Minute, with 70x higher cross-shard JOIN performance, a Memcached NoSQL key-value API and cross-data center replication. This release has been a huge hit, with downloads and deployments quickly reaching record levels. The announcement of the first MySQL Cluster 7.3 Early Access labs release at today's MySQL Innovation Day event demonstrates the continued pace of Cluster development, and provides an opportunity for the community to evaluate and give feedback on the new features they want to see.

    What's the Plan for MySQL Cluster 7.3? Well, Foreign Keys, as you may have gathered by now (!), and this is the focus of this first Labs Release. As with MySQL Cluster 7.2, we plan to publish a series of preview releases for 7.3 that will incrementally add new candidate features for a final GA release (subject to the usual safe harbor statement below*), including:
    - New NoSQL APIs;
    - Features to automate the configuration and provisioning of multi-node clusters, on premise or in the cloud;
    - Performance and scalability enhancements;
    - Taking advantage of features in the latest MySQL 5.x Server GA.

    Design Rationale. MySQL Cluster is designed as a "Not-Only-SQL" database. It combines attributes that enable users to blend the best of both relational and NoSQL technologies into solutions that deliver web scalability with 99.999% availability and real-time performance, including:
    - Concurrent NoSQL and SQL access to the database;
    - Auto-sharding with simple scale-out across commodity hardware;
    - Multi-master replication with failover and recovery both within and across data centers;
    - Shared-nothing architecture with no single point of failure;
    - Online scaling and schema changes;
    - ACID compliance and support for complex queries, across shards.
    Native support for Foreign Key constraints enables users to extend the benefits of MySQL Cluster into a broader range of use cases, including:
    - Packaged applications in areas such as eCommerce and Web Content Management that prescribe databases with Foreign Key support.
    - In-house developments benefiting from Foreign Key constraints to simplify data models and eliminate the additional application logic needed to maintain data consistency and integrity between tables.

    Implementation. The Foreign Key functionality is implemented directly within MySQL Cluster's data nodes, allowing any client API accessing the cluster to benefit from it – whether using SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA or HTTP/REST). The core referential actions defined in the SQL:2003 standard are implemented: CASCADE, RESTRICT, NO ACTION and SET NULL. In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the cluster continues to serve both read and write requests during the operation (see the short SQL sketch at the end of this post). An important difference to note from the Foreign Key implementation in InnoDB is that MySQL Cluster does not support the updating of Primary Keys from within the data nodes themselves; instead, the UPDATE is emulated with a DELETE followed by an INSERT operation. Therefore an UPDATE operation will return an error if the parent reference uses a Primary Key, unless the CASCADE action is specified, in which case the delete operation will result in the corresponding rows in the child table being deleted. The Engineering team plans to change this behavior in a subsequent preview release. Also note that in InnoDB, "NO ACTION" is identical to "RESTRICT". In the case of MySQL Cluster, "NO ACTION" means "deferred check", i.e. the constraint is checked before commit, allowing user-defined triggers to automatically make changes in order to satisfy the Foreign Key constraints.

    Configuration. There is nothing special you have to do here – Foreign Key constraint checking is enabled by default. If you intend to migrate existing tables from another database or storage engine, for example from InnoDB, there are a couple of best practices to observe:
    1. Analyze the structure of the Foreign Key graph and run the ALTER TABLE ... ENGINE=NDB statements in the correct sequence to ensure constraints are enforced.
    2. Alternatively, drop the Foreign Key constraints prior to the import process and then recreate them when complete.

    Getting Started. Read this blog for a demonstration of using Foreign Keys with MySQL Cluster. You can download the MySQL Cluster 7.3 Labs Release with Foreign Keys today (select the mysql-cluster-7.3-labs-June-2012 build). If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a single host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3). Post any questions to the MySQL Cluster forum where our Engineering team will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu). And if you have any feedback, please post it to the Comments section of this blog.

    Summary. MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. This first Labs Release of MySQL Cluster 7.3 gives you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and which other features you want to see in future releases.

    * Safe Harbor Statement: This information is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.
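    As a concrete taster, here is a minimal SQL sketch of the new constraints in action (table, column and constraint names are hypothetical; this assumes the 7.3 labs build with the default foreign key checking described above):

        -- two hypothetical tables on the NDB (MySQL Cluster) storage engine
        CREATE TABLE parent (
            id INT NOT NULL PRIMARY KEY
        ) ENGINE=NDB;

        CREATE TABLE child (
            id INT NOT NULL PRIMARY KEY,
            parent_id INT,
            INDEX idx_parent_id (parent_id),
            CONSTRAINT fk_child_parent
                FOREIGN KEY (parent_id) REFERENCES parent (id)
                ON DELETE CASCADE      -- deleting a parent removes its children
                ON UPDATE RESTRICT
        ) ENGINE=NDB;

        -- constraints can also be dropped and re-added online,
        -- while the cluster continues serving reads and writes:
        ALTER TABLE child DROP FOREIGN KEY fk_child_parent;
        ALTER TABLE child ADD CONSTRAINT fk_child_parent
            FOREIGN KEY (parent_id) REFERENCES parent (id);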

    Read the article

  • How to register a QueryExecuterFactory in JasperReports in Java

    - by Bosso
    Hi, I implemented a JRQueryExecuter and want to use it in a report. Using iReport I managed to register and use the executer. Now I want to use it in a Java application, but I can't find any resources on how to register the factory. I thought it would be enough to have the implementation on the classpath, but I get the following exception:

        Caused by: net.sf.jasperreports.engine.JRException: No query executer factory class registered for tmql queries.
            at net.sf.jasperreports.engine.query.DefaultQueryExecuterFactoryBundle.getQueryExecuterFactory(DefaultQueryExecuterFactoryBundle.java:80)
            at net.sf.jasperreports.engine.util.JRQueryExecuterUtils.getQueryExecuterFactory(JRQueryExecuterUtils.java:57)
            at net.sf.jasperreports.engine.design.JRDesignDataset.queryLanguageChanged(JRDesignDataset.java:1006)

    Can anybody give me a hint? Regards, Hannes
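    For anyone else hitting this: outside iReport, JasperReports resolves query executer factories through its properties mechanism, so one approach that should work is an entry in a jasperreports.properties file on the classpath (the property prefix is JasperReports' own; the factory class name below is a hypothetical placeholder for your implementation):

        # jasperreports.properties
        # pattern: net.sf.jasperreports.query.executer.factory.<language>=<factory class>
        net.sf.jasperreports.query.executer.factory.tmql=com.example.TmqlQueryExecuterFactory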

    Read the article

  • A New Threat To Web Applications: Connection String Parameter Pollution (CSPP)

    - by eric.maurice
    Hi, this is Shaomin Wang. I am a security analyst in Oracle's Security Alerts Group. My primary responsibility is to evaluate the security vulnerabilities reported externally by security researchers on Oracle Fusion Middleware and to ensure timely resolution through the Critical Patch Update. Today, I am going to talk about a serious type of attack: Connection String Parameter Pollution (CSPP).

    Earlier this year, at the Black Hat DC 2010 Conference, two Spanish security researchers, Jose Palazon and Chema Alonso, unveiled a new class of security vulnerabilities which target insecure dynamic connections between web applications and databases. The attack, called Connection String Parameter Pollution (CSPP), specifically exploits the semicolon-delimited database connection strings that are constructed dynamically based on user inputs from web applications. CSPP, if carried out successfully, can be used to steal user identities and hijack web credentials. CSPP is a high risk attack because of the relative ease with which it can be carried out (low access complexity) and the potential results it can have (high impact). In today's blog, we are going to first look at what connection strings are and then review the different ways connection string injections can be leveraged by malicious hackers. We will then discuss how CSPP differs from traditional connection string injection, and the measures organizations can take to prevent this kind of attack.

    In web applications, a connection string is a set of values that specifies information to connect to backend data repositories, in most cases databases. The connection string is passed to a provider or driver to initiate a connection. Vendors or manufacturers write their own providers for different databases. Since there are many different providers and each provider has multiple ways to make a connection, there are many different ways to write a connection string. Here are some examples of connection strings for Oracle Data Provider for .NET (ODP.NET; manufacturer: Oracle; type: .NET Framework class library):

        Using TNS:
            Data Source = orcl; User ID = myUsername; Password = myPassword;
        Using integrated security:
            Data Source = orcl; Integrated Security = SSPI;
        Using the Easy Connect Naming Method:
            Data Source = username/password@//myserver:1521/my.server.com
        Specifying pooling parameters:
            Data Source = myOracleDB; User Id = myUsername; Password = myPassword; Min Pool Size = 10; Connection Lifetime = 120; Connection Timeout = 60; Incr Pool Size = 5; Decr Pool Size = 2;

    There are many variations of connection strings, but the majority are key/value pairs delimited by semicolons. Attacks on connection strings are not new (see, for example, this SANS White Paper on Securing SQL Connection Strings). Connection strings are vulnerable to injection attacks when dynamic string concatenation is used to build connection strings based on user input. When the user input is not validated or filtered, and malicious text or characters are not properly escaped, an attacker can potentially access sensitive data or resources. For a number of years now, vendors, including Oracle, have created connection string builder classes to help developers generate valid connection strings and potentially prevent this kind of vulnerability. Unfortunately, not all application developers use these utilities because they are not aware of the danger posed by this kind of attack.

    So how are Connection String Parameter Pollution (CSPP) attacks different from traditional connection string injection attacks? First, let's look at what parameter pollution attacks are. Parameter pollution is a technique that typically involves appending repeated parameters to request strings to attack the receiving end. Much of the public attention around parameter pollution was initiated as a result of a presentation on HTTP Parameter Pollution attacks by Stefano Di Paola and Luca Carettoni delivered at the 2009 AppSec OWASP Conference in Poland. In HTTP Parameter Pollution attacks, an attacker submits additional parameters in HTTP GET/POST requests to a web application, and if these parameters have the same name as an existing parameter, the web application may react in different ways depending on how the web application and web server deal with multiple parameters with the same name.

    When applied to connection strings, the rule for the majority of database providers is the "last one wins" algorithm: if a KEYWORD=VALUE pair occurs more than once in the connection string, the value associated with the LAST occurrence is used. This opens the door to some serious attacks. By way of example, in a web application, a user enters a username and password, and a connection string is generated to connect to the back end database:

        Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = xxx;

    In the password field, if the attacker enters "xxx; Integrated Security = true", the connection string becomes:

        Data Source = myDataSource; Initial Catalog = db; Integrated Security = no; User ID = myUsername; Password = xxx; Integrated Security = true;

    Under the "last one wins" principle, the web application will then try to connect to the database using the operating system account under which the application is running, bypassing normal authentication. CSPP poses serious risks for unprepared organizations. It can be particularly dangerous if an Enterprise Systems Management web front-end is compromised, because attackers can then gain access to control panels to configure databases, system accounts, etc.

    Fortunately, organizations can take steps to prevent this kind of attack. CSPP falls into the injection category of attacks, like Cross Site Scripting or SQL Injection, which are made possible when inputs from users are not properly escaped or sanitized. Escaping is a technique used to ensure that characters (mostly from user inputs) are treated as data rather than as characters that are meaningful to the interpreter's parser. Software developers need to become aware of the danger of these attacks and learn about the defense mechanisms they need to introduce in their code. As well, software vendors need to provide templates or classes to facilitate coding and eliminate developers' guesswork for protecting against such vulnerabilities. Oracle has introduced the OracleConnectionStringBuilder class in Oracle Data Provider for .NET. Using this class, developers can employ a configuration file to provide the connection string and/or dynamically set the values through key/value pairs. It makes creating connection strings less error-prone and easier to manage, and ultimately using the OracleConnectionStringBuilder class provides better security against injection into connection strings.

    For More Information:
    - The OracleConnectionStringBuilder documentation is located at http://download.oracle.com/docs/cd/B28359_01/win.111/b28375/OracleConnectionStringBuilderClass.htm
    - Oracle has developed a publicly available course on preventing SQL Injection. The Server Technologies Curriculum course "Defending Against SQL Injection Attacks!" is located at http://st-curriculum.oracle.com/tutorial/SQLInjection/index.htm
    - The OWASP web site also provides a number of useful resources. It is located at http://www.owasp.org/index.php/Main_Page

    Read the article

  • Announcing: Improvements to the Windows Azure Portal

    - by ScottGu
    Earlier today we released a number of enhancements to the new Windows Azure Management Portal. These new capabilities include:
    - Service Bus Management and Monitoring
    - Support for Managing Co-administrators
    - Import/Export support for SQL Databases
    - Virtual Machine Experience Enhancements
    - Improved Cloud Service Status Notifications
    - Media Services Monitoring Support
    - Storage Container Creation and Access Control Support
    All of these improvements are now live in production and available to start using immediately. Below are more details on them.

    Service Bus Management and Monitoring. The new Windows Azure Management Portal now supports Service Bus management and monitoring. Service Bus provides rich messaging infrastructure that can sit between applications (or between cloud and on-premise environments) and allow them to communicate in a loosely coupled way for improved scale and resiliency. With the new Service Bus experience, you can now create and manage Service Bus Namespaces, Queues, Topics, Relays and Subscriptions, and you can get rich monitoring for Service Bus Queues, Topics and Subscriptions. To create a Service Bus namespace, you can now select the "Service Bus" tab in the Windows Azure portal and then simply select the CREATE command. Doing so will bring up a new "Create a Namespace" dialog that allows you to name and create a new Service Bus Namespace. Once created, you can obtain security credentials associated with the Namespace via the ACCESS KEY command. This gives you the ability to obtain the connection string associated with the service namespace; you can copy and paste these values into any application that requires these credentials. It is also now easy to create Service Bus Queues and Topics via the NEW experience in the portal drawer. Simply click the NEW command and navigate to the "App Services" category to create a new Service Bus entity. Once you provision a new Queue or Topic it can be managed in the portal: clicking on a namespace will display all queues and topics within it, and clicking on an item in the list will allow you to drill down into a dashboard view that allows you to monitor the activity and traffic within it, as well as perform operations on it. For example, the view of an "orders" queue now surfaces both the incoming and outgoing message flow rate, as well as the total queue length and queue size. To monitor pub/sub subscriptions you can use the ADD METRICS command within a topic and select a specific subscription to monitor.

    Support for Managing Co-Administrators. You can now add co-administrators for your Windows Azure subscription using the new Windows Azure Portal. This allows you to share management of your Windows Azure services with other users. Subscription co-administrators share the same administrative rights and permissions that the service administrator has, except a co-administrator cannot change or view billing details about the account, nor remove the service administrator from a subscription. In the SETTINGS section, click on the ADMINISTRATORS tab, and select the ADD button to add a co-administrator to your subscription. To add a co-administrator, you specify the email address for a Microsoft account (formerly Windows Live ID) or an organizational account, and choose the subscription you want to add them to. You can later update the subscriptions that the co-administrator has access to by clicking on the EDIT button, and then selecting or deselecting the subscriptions to which they belong.

    Import/Export Support for SQL Databases. The Windows Azure administration portal now supports importing and exporting SQL Databases to/from Blob Storage. Databases can be imported/exported to blob storage using the same BACPAC file format that is supported with SQL Server 2012. Among other benefits, this makes it easy to copy and migrate databases between on-premise and cloud environments. SQL Databases now have an EXPORT command in the bottom drawer that, when pressed, will prompt you to save your database to a Windows Azure storage container. The UI allows you to choose an existing storage account or create a new one, as well as the name of the BACPAC file to persist in blob storage. You can also now import and create a new SQL Database by using the NEW command; this will prompt you to select the storage container and file to import the database from. The Windows Azure Portal enables you to monitor the progress of import and export operations. If you choose to log out of the portal, you can come back later and check on the status of all of the operations in the new history tab of the SQL Database server; this shows your entire import and export history and the status (success/fail) of each.

    Enhancements to the Virtual Machine Experience. One of the common pain points we have heard from customers using the preview of our new Virtual Machine support has been the inability to delete the associated VHDs when a VM instance (or VM drive) gets deleted. Prior to today's release the VHDs would remain in your storage account and accumulate storage charges. You can now navigate to the Disks tab within the Virtual Machine extension, select a VM disk to delete, and click the DELETE DISK command. When you click the DELETE DISK button you have the option to delete the disk plus the associated .VHD file (completely clearing it from storage); alternatively, you can delete the disk but still retain a .VHD copy of it in storage.

    Improved Cloud Service Status Notifications. The Windows Azure portal now exposes more information about the health status of role instances. If any of the instances are in a non-running state, the status at the top of the dashboard will summarize the status (and update automatically as the role health changes). Clicking the instance hyperlink within this status summary view will navigate you to a detailed role instance view, and allow you to get more detailed health status of each of the instances. The portal has been updated to provide more specific status information within this detailed view, giving you better visibility into the health of your app.

    Monitoring Support for Media Services. Windows Azure Media Services allows you to create media processing jobs (for example: encoding media files) in your Windows Azure Media Services account. In the Windows Azure Portal, you can now monitor the number of encoding jobs that are queued up for processing as well as active, failed and queued tasks for encoding jobs. On your media services account dashboard, you can visualize the monitoring data for the last 6 hours, 24 hours or 7 days.

    Storage Container Creation and Access Control Support. You can now create Windows Azure Storage containers from within the Windows Azure Portal. After selecting a storage account, you can navigate to the CONTAINERS tab and click the ADD CONTAINER command. This will display a dialog that lets you name the new container and control access to it. You can also update the access setting as well as the container metadata of existing containers by selecting one and then using the new EDIT CONTAINER command; this brings up the edit container dialog that allows you to change and save its settings. In addition to creating and editing containers, you can click on them within the portal to drill in and view the blobs within them.

    Summary. The above features are all now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it. We'll have even more new features and enhancements coming later this month – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5, and support .NET 4.5 with Websites). Keep an eye out on my blog for details as these new features become available.

    Hope this helps,
    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • GetVirtualPath not making any sense.

    - by HeavyWave
    Can anyone explain why this code with the given routes returns the first route?

        routes.MapRoute(null, "user/approve", new { controller = "Users", action = "Approve" }),
        routes.MapRoute(null, "user/{username}", new { controller = "Users", action = "Profile" }),
        routes.MapRoute(null, "user/{username}/{action}", new { controller = "Users" }),
        routes.MapRoute(null, "user/{username}/{action}/{id}", new { controller = "Users" }),
        routes.MapRoute(null, "search/{query}", new { controller = "Artists", action = "Search", page = 1 }),
        routes.MapRoute(null, "search/{query}/{page}", new { controller = "Artists", action = "Search" }),
        routes.MapRoute(null, "music", new { controller = "Artists", action = "Index", page = 1 }),
        routes.MapRoute(null, "music/page/{page}", new { controller = "Artists", action = "Index" })

        var pageLinkValueDictionary = new RouteValueDictionary(this.linkWithoutPageValuesDictionary);
        pageLinkValueDictionary.Add("page", pageNumber);
        var virtualPathData = RouteTable.Routes.GetVirtualPath(this.viewContext.RequestContext, pageLinkValueDictionary);

    Here GetVirtualPath always returns user/approve, although there is no {page} parameter in that route. Furthermore, everything works as expected without the first route. I have found this link http://www.codeplex.com/aspnet/WorkItem/View.aspx?WorkItemId=2033 but it wasn't very helpful. It looks like GetVirtualPath was not implemented with large collections of routes in mind. I am using ASP.NET MVC 1.0.

    Read the article

  • Debugging dynamic SQL + dynamic tables in MS SQL Server 2008

    - by Hamish Grubijan
    Hi, I have a messy stored procedure which uses dynamic SQL. I can debug it at runtime by adding print @sql; (where @sql is the string containing the dynamic SQL) right before I call execute (@sql);. Now, the multi-page stored procedure also creates temp tables dynamically and uses them in a query. I want to print those tables to the console right before I do an execute, so that I know exactly what the query is trying to do. However, SQL Server 2008 does not like that. When I try print #temp_table; and compile the stored procedure, I get this error:

        The name "#temp_table" is not permitted in this context. Valid expressions are constants, constant expressions, and (in some contexts) variables. Column names are not permitted.

    Please help.
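    The error comes from the fact that PRINT only accepts string expressions, not table names. One low-tech workaround (a sketch; the names match the question) is to SELECT the temp table's contents so they appear in the results pane just before the dynamic statement runs:

        -- dump the temp table for inspection right before the execute
        SELECT '#temp_table' AS debug_label, * FROM #temp_table;

        PRINT @sql;        -- the dynamic SQL text, as before
        EXECUTE (@sql);    -- then run it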

    Read the article

  • Stop Progressbar manual scrolling in Android

    - by Rony
    Hi experts, I have already posted my query here, but unfortunately did not get the required support for this simple question: http://www.anddev.org/prevent_manual_moving_of_seekbar_widget-t13048.html I am using a progressbar widget in my application to show the download progress. In its current state, I am able to manually move the slider back and forth while the download is progressing. How can I prevent the manual moving of the progressbar, so that it moves only based on progressbar.setProgress(progressbar.getProgress() + 1024);? Currently this works perfectly, but I need to prevent the manual moving of the progressbar during the download. Any help in this regard is appreciated. Regards, Rony

    Read the article

  • Service Broker, not ETL

    - by jamiet
    I have been very quiet on this blog of late and one reason for that is I have been very busy on a client project that I would like to talk about a little here. The client that I have been working for has a website that runs on a distributed architecture utilising a messaging infrastructure for communication between different endpoints. My brief was to build a system that could consume these messages and produce analytical information in near-real-time. More specifically, I basically had to deliver a data warehouse; however, it was the real-time aspect of the project that really intrigued me. This real-time requirement meant that using an Extract, Transform, Load (ETL) tool was out of the question and so I had no choice but to write T-SQL code (i.e. stored procedures) to process the incoming messages and load the data into the data warehouse.

    This concerned me though – I had no way to control the rate at which data would arrive into the system, yet we were going to have end-users querying the system at the same time that those messages were arriving; the potential for contention in such a scenario was pretty high and was something I wanted to minimise as much as possible. Moreover, I did not want the processing of data inside the data warehouse to have any impact on the customer-facing website.

    As you have probably guessed from the title of this blog post, this is where Service Broker stepped in! For those that have not heard of it, Service Broker is a queuing technology that has been built into SQL Server since SQL Server 2005. It provides a number of features, however the one that was of interest to me was the fact that it facilitates asynchronous data processing which, in layman's terms, means the ability to process some data without requiring the system that supplied the data to wait for the response. That was a crucial feature because on this project the customer-facing website (in effect an OLTP system) would be calling one of our stored procedures with each message – we did not want to cause the OLTP system to wait on us every time we processed one of those messages. This asynchronous nature also helps to alleviate the contention problem because the asynchronous processing activity is handled just like any other task in the database engine and hence can wait on another task (such as an end-user query).

    Service Broker it was then! The stored procedure called by the OLTP system would simply put the message onto a queue and we would use a feature called activation to pick each message off the queue in turn and process it into the warehouse. At the time of writing the system is not yet up to full capacity but so far everything seems to be working OK (touch wood) and crucially our users are seeing data in near-real-time. By near-real-time I am talking about latencies of a few minutes at most, and to someone like me who is used to building systems that have overnight latencies that is a huge step forward!

    So then, am I advocating that you all go out and dump your ETL tools? Of course not, no! What this project has taught me though is that in certain scenarios there may be better ways to implement a data warehouse system than the traditional "load data in overnight" approach that we are all used to. Moreover, I have really enjoyed getting to grips with a new technology and even if you don't want to use Service Broker you might want to consider asynchronous messaging architectures for your BI/data warehousing solutions in the future.

    This has been a very high level overview of my use of Service Broker and I have deliberately left out much of the minutiae of what has been a very challenging implementation. Nonetheless I hope I have caused you to reflect upon your own approaches to BI and question whether other approaches may be more tenable. All comments and questions gratefully received! Lastly, if you have never used Service Broker before and want to kick the tyres, I have provided below a very simple "Service Broker Hello World" script that will create all of the objects required to facilitate Service Broker communications and then send the message "Hello World" from one place to another! This doesn't represent a "proper" implementation per se because it doesn't close down conversation objects (which you should always do in a real-world scenario) but it is enough to demonstrate the capabilities! @Jamiet

        -----------------------------------------------------------------------------------------------
        /* This is a basic Service Broker Hello World app. Have fun! -Jamie */
        USE MASTER
        GO
        CREATE DATABASE SBTest
        GO
        -- Turn Service Broker on!
        ALTER DATABASE SBTest SET ENABLE_BROKER
        GO
        USE SBTest
        GO
        -- 1) We need to create a message type. Note that our message type is
        --    very simple and allows any type of content
        CREATE MESSAGE TYPE HelloMessage VALIDATION = NONE
        GO
        -- 2) Once the message type has been created, we need to create a contract
        --    that specifies who can send what types of messages
        CREATE CONTRACT HelloContract (HelloMessage SENT BY INITIATOR)
        GO
        -- We can query the metadata of the objects we just created
        SELECT * FROM sys.service_message_types WHERE name = 'HelloMessage';
        SELECT * FROM sys.service_contracts WHERE name = 'HelloContract';
        SELECT * FROM sys.service_contract_message_usages
        WHERE  service_contract_id IN (SELECT service_contract_id FROM sys.service_contracts WHERE name = 'HelloContract')
        AND    message_type_id IN (SELECT message_type_id FROM sys.service_message_types WHERE name = 'HelloMessage');
        -- 3) The communication is between two endpoints. Thus, we need two queues to
        --    hold messages
        CREATE QUEUE SenderQueue
        CREATE QUEUE ReceiverQueue
        GO
        -- More querying metadata
        SELECT * FROM sys.service_queues WHERE name IN ('SenderQueue','ReceiverQueue');
        -- We can also select from the queues as if they were tables
        SELECT * FROM SenderQueue
        SELECT * FROM ReceiverQueue
        -- 4) Create the required services and bind them to the above created queues
        CREATE SERVICE Sender ON QUEUE SenderQueue
        CREATE SERVICE Receiver ON QUEUE ReceiverQueue (HelloContract)
        GO
        -- More querying metadata
        SELECT * FROM sys.services WHERE name IN ('Receiver','Sender');
        -- 5) At this point, we can begin the conversation between the two services by
        --    sending messages
        DECLARE @conversationHandle UNIQUEIDENTIFIER
        DECLARE @message NVARCHAR(100)
        BEGIN
          BEGIN TRANSACTION;
          BEGIN DIALOG @conversationHandle
                FROM SERVICE Sender
                TO SERVICE 'Receiver'
                ON CONTRACT HelloContract
                WITH ENCRYPTION = OFF
          -- Send a message on the conversation
          SET @message = N'Hello, World';
          SEND ON CONVERSATION @conversationHandle
               MESSAGE TYPE HelloMessage (@message)
          COMMIT TRANSACTION
        END
        GO
        -- Check the contents of the queues
        SELECT * FROM SenderQueue
        SELECT * FROM ReceiverQueue
        GO
        -- Receive a message from the queue
        RECEIVE CONVERT(NVARCHAR(MAX), message_body) AS MESSAGE
        FROM ReceiverQueue
        GO
        -- If no messages were received and/or you can't see anything on the queues,
        -- you may wish to check the following for clues:
        SELECT * FROM sys.transmission_queue
        -- Cleanup
        DROP SERVICE Sender
        DROP SERVICE Receiver
        DROP QUEUE SenderQueue
        DROP QUEUE ReceiverQueue
        DROP CONTRACT HelloContract
        DROP MESSAGE TYPE HelloMessage
        GO
        USE MASTER
        GO
        DROP DATABASE SBTest
        GO
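    As the post notes, a real implementation should always end its conversations. A minimal sketch of what that adds to the receive step of the demo above (variable names are illustrative):

        -- receive one message and close the receiver's side of the dialog
        DECLARE @handle UNIQUEIDENTIFIER;
        DECLARE @msg NVARCHAR(MAX);
        BEGIN TRANSACTION;
        RECEIVE TOP (1)
                @handle = conversation_handle,
                @msg    = CONVERT(NVARCHAR(MAX), message_body)
        FROM ReceiverQueue;
        IF @handle IS NOT NULL
            END CONVERSATION @handle;  -- the initiator should do likewise on the EndDialog message
        COMMIT TRANSACTION;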

    Read the article

  • asp.net dynamic sitemap with querystring? Session variable seems like overkill.

    - by Mike
    Hi, I'm using ASP.NET WebForms with the standard sitemap provider. The site hierarchy is:

        Home
          User
            Account
              Entry

    Going to the home page should show a user selection screen. Clicking on a user should list out the user's accounts with options to edit, delete and add accounts. Selecting an account should list out all of that account's entries with options to edit, delete and add entries. How do you normally pass this information between pages? I could use the query string, but then the sitemap doesn't work: the sitemap only has the exact page without the query string and therefore loses the information.

        /User/Account/List.aspx?User=123
        /User/Account/Entry/List.aspx?User=123&Account=322

    I could use a session variable, but this seems overkill. Thoughts and suggestions very appreciated. Thanks!

    Read the article

  • PHP mysql select results

    - by Jordan Pagaduan
        <?php
        $sql = "SELECT e.*,
                       l.counts  AS l_counts,  l.date  AS l_date,
                       lo.counts AS lo_counts, lo.date AS lo_date
                FROM employee e
                LEFT JOIN logs l    ON l.employee_id  = e.employee_id
                LEFT JOIN logout lo ON lo.employee_id = e.employee_id
                WHERE e.employee_id = " . (int)$_GET['salary'];
        $query = mysql_query($sql);
        $rows = mysql_fetch_array($query);
        while($countlog = $rows['l_counts']) {
            echo $countlog;
        }
        echo $rows['first_name'];
        echo $rows['last_name_name'];
        ?>

    I got what I want for first_name and last_name (only 1 result each). I wanted to loop over l_counts, but the loop does not stop counting until my PC needs to restart. LoL. How can I get the exact result of that? I only need to get the exact results of l_counts. Thank you, Jordan Pagaduan
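    The loop never terminates because $rows never changes inside it: $rows['l_counts'] evaluates to the same truthy value on every pass. If the goal is a single total per employee, one option is to let MySQL do the aggregation instead of looping in PHP. A sketch against the schema implied above (table and column names assumed from the question):

        SELECT e.first_name,
               e.last_name,
               (SELECT COALESCE(SUM(l.counts), 0)
                  FROM logs l
                 WHERE l.employee_id = e.employee_id)  AS total_l_counts,
               (SELECT COALESCE(SUM(lo.counts), 0)
                  FROM logout lo
                 WHERE lo.employee_id = e.employee_id) AS total_lo_counts
        FROM employee e
        WHERE e.employee_id = 42;  -- placeholder for the escaped $_GET['salary'] value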

    Read the article

  • Grails multiple databases

    - by srinath
    Hi, how can we write queries against 2 databases? I installed the datasources plugin and my domain classes are:

        class Organization {
            long id
            long company_id
            String name
            static mapping = {
                version false
                table 'organization_'
                id         column : 'organizationId'
                company_id column : 'companyId'
                name       column : 'name'
            }
        }

        class Assoc {
            Integer id
            Integer association_id
            Integer organization_id
            static mapping = {
                version false
                table 'assoc'
                id              column : 'ASSOC_ID'
                association_id  column : 'ASSOCIATION_ID'
                organization_id column : 'ORGANIZATION_ID'
            }
        }

    This is working:

        def org = Organization.list()
        def assoc = Assoc.list()

    and this is not working:

        def query = Organization.executeQuery("SELECT o.name as name, o.id FROM Organization o WHERE o.id IN (SELECT a.organization_id FROM Assoc a)")

    error:

        org.hibernate.hql.ast.QuerySyntaxException: Assoc is not mapped
        [SELECT o.name as name, o.id FROM org.com.domain.Organization o WHERE o.id IN (SELECT a.organization_id FROM AssocOrg a)]

    How can we connect to 2 databases using a single query? Thanks in advance.
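    A single HQL query runs against one session factory, so it cannot span two datasources; that is why Assoc is "not mapped" from Organization's side. If both schemas happen to live on the same MySQL server, one fallback (a sketch; db_one and db_two are placeholder schema names) is a native SQL query with schema-qualified table names, reusing the table and column names from the mappings above:

        SELECT o.name, o.organizationId
        FROM db_one.organization_ o
        WHERE o.organizationId IN (
            SELECT a.ORGANIZATION_ID
            FROM db_two.assoc a
        );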

    Read the article

  • Best way to compare contents of two tables in Teradata?

    - by Cade Roux
    When you need to compare two tables to see what the differences are, are there any tools or shortcuts you use, or do you hand-code the SQL to compare the two tables? Background: in my SQL Server environment, I created a stored procedure which inspects the metadata of the two tables/views, creates a query (as dynamic SQL) which joins the two tables on the specified key columns, and compares data in the compare columns, reporting key differences and data differences. The query can either be printed and modified/copied or just executed as is. We are not allowed to create stored procedures in our Teradata environment, unfortunately.
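    Absent stored procedures, the generated query itself can be hand-written per table pair. A sketch of the usual shape such a comparison takes (table names and the key_col/compare_col columns are placeholders):

        SELECT COALESCE(a.key_col, b.key_col) AS key_col,
               CASE
                   WHEN b.key_col IS NULL THEN 'only in table_a'
                   WHEN a.key_col IS NULL THEN 'only in table_b'
                   ELSE 'values differ'
               END AS difference
        FROM table_a a
        FULL OUTER JOIN table_b b
          ON a.key_col = b.key_col
        WHERE a.key_col IS NULL
           OR b.key_col IS NULL
           OR a.compare_col <> b.compare_col;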

    Read the article

  • Stock symbol auto-complete API

    - by Satish
    Hi, I am looking for a stock symbol look-up API. I am able to query Yahoo Finance with a symbol and retrieve the stock price and other details. What I am looking for is an auto-complete stock look-up API: if I query for "Go*", how can I get all stock symbols starting with GO (GOOG etc.)? Is there any API for wildcard stock symbol searches? Any help would be great. Thanks

    Read the article

  • Accurately display upload progress in Silverlight upload

    - by Matt
    I'm trying to debug a file upload / download issue I'm having. I've got a Silverlight file uploader, and to transmit the files I make use of the HttpWebRequest class. So I create a connection to my file upload handler on the server and begin transmitting. While a file uploads I keep track of total bytes written to the RequestStream so I can figure out a percentage. Now, working at home I've got a rather slow connection, and I think Silverlight, or the browser, is lying to me. It seems that my upload progress logic is inaccurate. When I do multiple file uploads (24 images of 3-6mb big in my testing), the logic reports that the files finish uploading, but I believe that it only reflects the progress of bytes written to the RequestStream, not the actual amount of bytes uploaded. What is the most accurate way to measure upload progress? Here's the logic I'm using:

        public void Upload()
        {
            if( _TargetFile != null )
            {
                Status = FileUploadStatus.Uploading;
                Abort = false;
                long diff = _TargetFile.Length - BytesUploaded;
                UriBuilder ub = new UriBuilder( App.siteUrl + "upload.ashx" );
                bool complete = diff <= ChunkSize;
                ub.Query = string.Format( "{3}name={0}&StartByte={1}&Complete={2}", fileName, BytesUploaded, complete,
                    string.IsNullOrEmpty( ub.Query ) ? "" : ub.Query.Remove( 0, 1 ) + "&" );
                HttpWebRequest webrequest = ( HttpWebRequest ) WebRequest.Create( ub.Uri );
                webrequest.Method = "POST";
                webrequest.BeginGetRequestStream( WriteCallback, webrequest );
            }
        }

        private void WriteCallback( IAsyncResult asynchronousResult )
        {
            HttpWebRequest webrequest = ( HttpWebRequest ) asynchronousResult.AsyncState;
            // End the operation.
            Stream requestStream = webrequest.EndGetRequestStream( asynchronousResult );
            byte[] buffer = new Byte[ 4096 ];
            int bytesRead = 0;
            int tempTotal = 0;
            Stream fileStream = _TargetFile.OpenRead();
            fileStream.Position = BytesUploaded;
            while( ( bytesRead = fileStream.Read( buffer, 0, buffer.Length ) ) != 0
                   && tempTotal + bytesRead < ChunkSize && !Abort )
            {
                requestStream.Write( buffer, 0, bytesRead );
                requestStream.Flush();
                BytesUploaded += bytesRead;
                tempTotal += bytesRead;
                int percent = ( int ) ( ( BytesUploaded / ( double ) _TargetFile.Length ) * 100 );
                UploadPercent = percent;
                if( UploadProgressChanged != null )
                {
                    UploadProgressChangedEventArgs args =
                        new UploadProgressChangedEventArgs( percent, bytesRead, BytesUploaded, _TargetFile.Length, _TargetFile.Name );
                    SmartDispatcher.BeginInvoke( () => UploadProgressChanged( this, args ) );
                }
            }
            // only close the stream if it came from the file, don't close resizestream so we don't have to resize it over again.
            fileStream.Close();
            requestStream.Close();
            webrequest.BeginGetResponse( ReadCallback, webrequest );
        }

    Read the article

  • Is this valid EJB-QL?

    - by Yishai
    I have the following EJB-QL construct in several EJB 2.1 finder methods:

        SELECT DISTINCT OBJECT(rd)
        FROM RequestDetail rd, DetailResponse dr
        WHERE dr.updateReqResponseParentID IS NOT NULL
          AND dr.updateReqResponseParentID = ?1
          AND rd.requestDetailID = dr.requestDetailID
          AND rd.deleted IS NULL
          AND dr.deleted IS NULL

    IDEA's EJB-QL inspection flags the use of the two entities in the FROM clause (FROM RequestDetail rd, DetailResponse dr) with an inspection which says:

        Several ranged variable declarations are not supported, use collection member declarations instead (e.g. IN(o.lineItems))

    The queries themselves function fine (as in, return the expected results) on JBoss 4.2. Is IDEA all wet here, or is there a valid issue with the query? And what is the actual preferred alternative syntax for such a query?
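    For what it's worth, the collection-member form the inspection asks for only applies if RequestDetail declares a relationship to its DetailResponses; assuming such a CMR relationship exists (the detailResponses name below is hypothetical), the rewrite would look roughly like this, with the rd/dr join condition implied by the relationship:

        SELECT DISTINCT OBJECT(rd)
        FROM RequestDetail rd, IN(rd.detailResponses) dr
        WHERE dr.updateReqResponseParentID IS NOT NULL
          AND dr.updateReqResponseParentID = ?1
          AND rd.deleted IS NULL
          AND dr.deleted IS NULL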

    Read the article

  • Rails 2.3.x, named_scope chaining with INNER JOIN complication

    - by randombits
    I have two hypothetical classes, Foo and Bar. Foo contains many Bars. Bar can only belong to one Foo. Ultimately the SQL query I'm trying to make happen looks like the following:

        SELECT * FROM bar
        INNER JOIN foo ON bar.foo_id = foo.id
        WHERE bar.in_use = 0 AND bar.customer_id = 1 AND foo.category = 0

    That query does what I need. Now I'm trying to break the problem down in Rails using chained named_scopes. First, the straightforward in_use and customer_id scopes I have set:

        named_scope :available, :conditions => { :in_use => 0 }
        named_scope :not_available, :conditions => { :in_use => 1 }
        named_scope :customer, lambda { |num| { :conditions => { :customer_id => num } } }

    Now the part I'm stuck at is that I'm trying to do something like this in my code:

        abar = Bar.available.customer(1).category(0)

    How and where do I put the category named_scope to make this work?

    Read the article

  • class hierarchy design for small java project

    - by user523956
    I have written Java code which does the following. The main goal is to fetch emails from (inbox, spam) folders and store them in a database. It fetches emails from gmail, gmx, web.de, yahoo and Hotmail. The following attributes are stored in a MySQL database: Slno, messagedigest, messageid, foldername, dateandtime, receiver, sender, subject, cc, size and emlfile. For gmail, gmx and web.de, I have used the JavaMail API, because email from them can be fetched with IMAP. For yahoo and hotmail, I have used an HTML parser and httpclient to fetch emails from the spam folder, and for the inbox folder I have used the POP3 JavaMail API. I want to have a proper class hierarchy which makes my code efficient and easily reusable. As of now I have designed the class hierarchy as below. I am sure it can still be improved, so I would like to have different opinions on it. I have the following classes and methods as of now:

    MainController: Here I pass the email id, password and folder names from which emails have to be fetched.

    Abstract class EmailProtocol, with these abstract methods (all methods except executeParser contain method definitions):
    - connectImap() // used by gmx, gmail and web.de email ids
    - connectPop3() // used by hotmail and yahoo to fetch emails of the inbox folder
    - createMessageDigest() // used by every email provider (gmx, gmail, web.de, yahoo, hotmail)
    - establishDBConnection() // used by every email
    - emailAlreadyExists() // used by every email; checks whether an email already exists in the db or not, and if not, stores it
    - storeemailproperties() // used by every email to store email properties to the MySQL database
    - executeParser() // nothing written in it; overridden and used by just hotmail and yahoo to fetch emails from the spam folder

    Imap extends EmailProtocol (nothing in it, but I have to have it to access methods of EmailProtocol; this is used to fetch emails from gmail, gmx and web.de. I know this is really a bad way but I don't know how to do it another way.)

    Hotmail extends EmailProtocol. Methods:
    - executeParser() // used by just the hotmail email id
    - fetchjunkemails() // also very specific to the hotmail email id

    Yahoo extends EmailProtocol. Methods (all specific to the yahoo email id):
    - executeParser()
    - storeEmailtotemptable()
    - MoveEmailtoInbox()
    - getFoldername()
    - nullorEquals()

    public class DateTimeFormat:
    - format() // formats the datetime of gmx, gmail and web.de emails
    - formatYahoodate() // formats the datetime of yahoo emails
    - formatHotmaildate() // formats the datetime of hotmail emails

    public class StringFormat:
    - ConvertStreamToString() // accessed by every class except DateTimeFormat
    - formatFromTo() // accessed by every class except DateTimeFormat

    public class CheckDatabaseExistance:
    - public static void checkForDatabaseTablesAvailability() // checks at the beginning whether the database and required tables exist in MySQL; if not, it creates them

    Please see the code of my MainController class so that you can have an idea about how I use the different classes.

        public class MainController {
            public static void main(String[] args) throws Exception {
                ArrayList<String> web_de_folders = new ArrayList<String>();
                web_de_folders.add("INBOX");
                web_de_folders.add("Unbekannt");
                web_de_folders.add("Spam");
                web_de_folders.add("OUTBOX");
                web_de_folders.add("SENT");
                web_de_folders.add("DRAFTS");
                web_de_folders.add("TRASH");
                web_de_folders.add("Trash");

                ArrayList<String> gmx_folders = new ArrayList<String>();
                gmx_folders.add("INBOX");
                gmx_folders.add("Archiv");
                gmx_folders.add("Entwürfe");
                gmx_folders.add("Gelöscht");
                gmx_folders.add("Gesendet");
                gmx_folders.add("Spamverdacht");
                gmx_folders.add("Trash");

                ArrayList<String> gmail_folders = new ArrayList<String>();
                gmail_folders.add("Inbox");
                gmail_folders.add("[Google Mail]/Spam");
                gmail_folders.add("[Google Mail]/Trash");
                gmail_folders.add("[Google Mail]/Sent Mail");

                ArrayList<String> pop3_folders = new ArrayList<String>();
                pop3_folders.add("INBOX");

                CheckDatabaseExistance.checkForDatabaseTablesAvailability();

                EmailProtocol imap = new Imap();
                System.out.println("CHECKING FOR NEW EMAILS IN WEB.DE...(IMAP)");
                System.out.println("*********************************************************************************");
                imap.connectImap("[email protected]", "pwd", web_de_folders);

                System.out.println("\nCHECKING FOR NEW EMAILS IN GMX.DE...(IMAP)");
                System.out.println("*********************************************************************************");
                imap.connectImap("[email protected]", "pwd", gmx_folders);

                System.out.println("\nCHECKING FOR NEW EMAILS IN GMAIL...(IMAP)");
                System.out.println("*********************************************************************************");
                imap.connectImap("[email protected]", "pwd", gmail_folders);

                EmailProtocol yahoo = new Yahoo();
                Yahoo y = new Yahoo();
                System.out.println("\nEXECUTING YAHOO PARSER");
                System.out.println("*********************************************************************************");
                y.executeParser("http://de.mc1321.mail.yahoo.com/mc/welcome?ymv=0", "[email protected]", "pwd");

                System.out.println("\nCHECKING FOR NEW EMAILS IN INBOX OF YAHOO (POP3)");
                System.out.println("*********************************************************************************");
                yahoo.connectPop3("[email protected]", "pwd", pop3_folders);

                System.out.println("\nCHECKING FOR NEW EMAILS IN INBOX OF HOTMAIL (POP3)");
                System.out.println("*********************************************************************************");
                yahoo.connectPop3("[email protected]", "pwd", pop3_folders);

                EmailProtocol hotmail = new Hotmail();
                Hotmail h = new Hotmail();
                System.out.println("\nEXECUTING HOTMAIL PARSER");
                System.out.println("*********************************************************************************");
                h.executeParser("https://login.live.com/ppsecure/post.srf", "[email protected]", "pwd");
            }
        }

    I have kept the DateTimeFormat and StringFormat classes public so that I can access their public methods from anywhere (e.g. DateTimeFormat.formatYahoodate() from different methods). This is the first time I have developed something in Java. It serves its purpose, but of course the code is still not very efficient, I think. I need your suggestions on this project.

    Read the article

  • Linq to SQL problem

    - by Ronnie Overby
    I have a local collection of recordIds (integers). I need to retrieve records that have every one of their child records' ids in that local collection. Here is my query:

        public List<int> OwnerIds { get; private set; }
        ...
        filteredPatches = from p in filteredPatches
                          where OwnerIds.All(o => p.PatchesOwners.Select(x => x.OwnerId).Contains(o))
                          select p;

    I am getting this error:

        Local sequence cannot be used in LINQ to SQL implementation of query operators except the Contains() operator.

    I get that .All() isn't supported by LINQ to SQL, but is there a way to do what I am trying to do?
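    Since only Contains() translates, one option is to restate the condition so Contains() does all the work. Matching the code as written (every id in OwnerIds appears among a patch's owners), this is relational division in SQL terms, and a sketch of the query LINQ ultimately needs to produce looks like this (table and column names are guesses based on the snippet above):

        -- patches whose owner set covers every id in the local list (here: 1, 2, 3)
        SELECT po.PatchId
        FROM PatchesOwners po
        WHERE po.OwnerId IN (1, 2, 3)          -- the Contains()-translatable part
        GROUP BY po.PatchId
        HAVING COUNT(DISTINCT po.OwnerId) = 3; -- = number of ids in the local collection

    The same shape (filter with Contains, group, compare the count to OwnerIds.Count) can then be expressed in LINQ without All().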

    Read the article

  • Web Application Project Deployment VS2010 - Precompile Views

    - by Malcolm Frexner
    Using Visual Studio 2010 Ultimate I created an ASP.NET MVC 2.0 Web Application. I read http://msdn.microsoft.com/query/dev10.query?appId=Dev10IDEF1&l=EN-US&k=k%28WEBAPPLICATIONPROJECTS.PACKAGEPUBLISHOVERVIEW%29;k%28TargetFrameworkMoniker-%22.NETFRAMEWORK%2cVERSION%3dV4.0%22%29&rd=true. It's about the new features for Web Application Deployment. I don't see an option to precompile Views. I also don't see other options that were available in previous versions of Web Deployment Projects, i.e. whether the web site is compiled into a single assembly or into one assembly per page. I had the impression that Web Application Project Deployment is the successor of the Web Deployment Project... maybe I am wrong about that. How should I precompile views now?

    Read the article

  • Squid parent cache for text/html only

    - by Salvador
    How do I configure squid to only send text/html requests to the parent cache? Right now I am using:

        cache_peer 127.0.0.1 parent 8080 0 no-query no-digest

    On the other hand, I get a lot of direct requests that do not use the parent proxy: some queries go FIRST_UP_PARENT and some DIRECT. How do I tell squid to always use the parent for text/html? BTW, it is a transparent proxy. I have tried:

        cache_peer 127.0.0.1 parent 8080 0 no-query no-digest
        acl elhtml req_mime_type -i ^text/html$
        acl elhtml req_mime_type -i text/html
        cache_peer_access 127.0.0.1 allow elhtml
        cache_peer_access 127.0.0.1 deny all

    and it does not work. Thanks in advance for the help.

    Read the article

  • Lucene.NET search index approach

    - by Tim Peel
    Hi, I am trying to put together a test case for using Lucene.NET on one of our websites. I'd like to do the following:
    - Index a single unique id.
    - Index across a comma-delimited string of terms or tags.

    For example, Item 1: Id = 1, Tags = Something,Separated-Term. I will then be structuring the search so I can look for documents against tags, i.e. tags:something OR tags:separated-term. I need to maintain the exact term value in order to search against it. I have something running, and the search query is being parsed as expected, but I am not seeing any results. Here's some code.

    My parser (_luceneAnalyzer is passed into my indexing service):

        var parser = new QueryParser(Lucene.Net.Util.Version.LUCENE_CURRENT, "Tags", _luceneAnalyzer);
        parser.SetDefaultOperator(QueryParser.Operator.AND);
        return parser;

    My Lucene.NET document creation:

        var doc = new Document();
        var id = new Field(
            "Id",
            NumericUtils.IntToPrefixCoded(indexObject.id),
            Field.Store.YES,
            Field.Index.NOT_ANALYZED,
            Field.TermVector.NO);
        var tags = new Field(
            "Tags",
            string.Join(",", indexObject.Tags.ToArray()),
            Field.Store.NO,
            Field.Index.ANALYZED,
            Field.TermVector.YES);
        doc.Add(id);
        doc.Add(tags);
        return doc;

    My search:

        var parser = BuildQueryParser();
        var query = parser.Parse(searchQuery);
        var searcher = Searcher;
        TopDocs hits = searcher.Search(query, null, max);
        IList<SearchResult> result = new List<SearchResult>();
        float scoreNorm = 1.0f / hits.GetMaxScore();
        for (int i = 0; i < hits.scoreDocs.Length; i++)
        {
            float score = hits.scoreDocs[i].score * scoreNorm;
            result.Add(CreateSearchResult(searcher.Doc(hits.scoreDocs[i].doc), score));
        }
        return result;

    I have two documents in my index, one with the tag "Something" and one with the tags "Something" and "Separated-Term". It's important for the - to remain in the terms as I want an exact match on the full value. When I search with "tags:Something" I do not get any results.

    Questions:
    - What Analyzer should I be using to achieve the search index I am after?
    - Are there any pointers for putting together a search such as this?
    - Why is my current search not returning any results?

    Many thanks

    Read the article

  • Return pre-UPDATE column values in PostgreSQL without using triggers, functions or other "magic"

    - by Python Larry
    I have a related question, but this is another part of MY puzzle. I would like to get the OLD VALUE of a column from a row that was UPDATEd, WITHOUT using triggers (nor stored procedures, nor any other extra, non-SQL/-query entities). The query I have is like this:

        UPDATE my_table
        SET    processing_by = our_id_info -- unique to this instance
        WHERE  trans_nbr IN (
                   SELECT trans_nbr
                   FROM   my_table
                   GROUP  BY trans_nbr
                   HAVING COUNT(trans_nbr) > 1
                   LIMIT  our_limit_to_have_single_process_grab
               )
        RETURNING row_id

    If I could do "FOR UPDATE ON my_table" at the end of the subquery, that'd be divine (and would fix my other question/problem). But that won't work: you can't have this AND a "GROUP BY" (which is necessary for figuring out the COUNT of trans_nbr's). Then I could just take those trans_nbr's and do a query first to get the (soon-to-be-) former processing_by values. I've tried doing it like:

        UPDATE my_table
        SET    processing_by = our_id_info -- unique to this instance
        FROM   my_table old_my_table
        JOIN   (
                   SELECT trans_nbr
                   FROM   my_table
                   GROUP  BY trans_nbr
                   HAVING COUNT(trans_nbr) > 1
                   LIMIT  our_limit_to_have_single_process_grab
               ) sub_my_table
          ON   old_my_table.trans_nbr = sub_my_table.trans_nbr
        WHERE  my_table.trans_nbr = sub_my_table.trans_nbr
          AND  my_table.processing_by = old_my_table.processing_by
        RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by

    But that can't work; "old_my_table" is not viewable outside of the join; the RETURNING clause is blind to it. I've long since lost count of all the attempts I've made; I have been researching this for literally hours. If I could just find a bullet-proof way to lock the rows in my subquery - and ONLY those rows, and WHEN the subquery happens - all the concurrency issues I'm trying to avoid would disappear.

    UPDATE: [WIPES EGG OFF FACE] Okay, so I had a typo in the non-generic code of the above that I said "doesn't work"; it does... thanks to Erwin Brandstetter, below, who stated it would. I re-did it (after a night's sleep, refreshed eyes, and a banana for bfast). Since it took me so long/hard to find this sort of solution, perhaps my embarrassment is worth it? At least this is on SO for posterity now... What I now have (and it works) is like this:

        UPDATE my_table
        SET    processing_by = our_id_info -- unique to this instance
        FROM   my_table AS old_my_table
        WHERE  trans_nbr IN (
                   SELECT trans_nbr
                   FROM   my_table
                   GROUP  BY trans_nbr
                   HAVING COUNT(*) > 1
                   LIMIT  our_limit_to_have_single_process_grab
               )
          AND  my_table.row_id = old_my_table.row_id
        RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by AS old_processing_by

    The COUNT(*) is per a suggestion from Flimzy in a comment on my other (linked above) question. (I was more specific than necessary. [In this instance.])


  • Custom Lookup Provider For NetBeans Platform CRUD Tutorial

    - by Geertjan
    For a long time I've been planning to rewrite the second part of the NetBeans Platform CRUD Application Tutorial to integrate the loosely coupled capabilities introduced in a separate series of articles based on articles by Antonio Vieiro (a great series, by the way). Nothing like getting into the Lookup stuff right from the get-go (rather than as an afterthought)! The question, of course, is how to integrate the loosely coupled capabilities in a logical way within that tutorial.

    Today I worked through the tutorial from scratch, up until the point where the prototype is completed, i.e., there's a JTextArea displaying data pulled from a database. That brought me to the place where I needed to be. In fact, as soon as the prototype is completed, i.e., the database connection has been shown to work, the whole story about Lookup.Provider and InstanceContent should be introduced, so that all the subsequent sections, i.e., everything within "Integrating CRUD Functionality", will be done by adding new capabilities to the Lookup.Provider. However, before I perform open heart surgery on that tutorial, I'd like to run the scenario by all those reading this blog who understand what I'm trying to do! (I.e., probably anyone who has read this far into this blog entry.)

    So, this is what I propose should happen, in this order:

    1. Point out the fact that right now the database access code is found directly within our TopComponent. Not good, because you're mixing view code with data code and, ideally, the developers creating the user interface wouldn't need to know anything about the data access layer. Better to separate out the data access code into a separate class, within the CustomerLibrary module, i.e., far away from the module providing the user interface, with this content:

        public class CustomerDataAccess {

            public List<Customer> getAllCustomers() {
                return Persistence.createEntityManagerFactory("CustomerLibraryPU").
                        createEntityManager().createNamedQuery("Customer.findAll").getResultList();
            }

        }

    2. Point out the fact that there is a concept of "Lookup" (which readers of the tutorial should know about since they should have followed the NetBeans Platform Quick Start), which is a registry into which objects can be published and to which other objects can be listening. In the same way as a TopComponent provides a Lookup, as demonstrated in the NetBeans Platform Quick Start, your own object can also provide a Lookup. So, therefore, let's provide a Lookup for Customer objects:

        import org.openide.util.Lookup;
        import org.openide.util.lookup.AbstractLookup;
        import org.openide.util.lookup.InstanceContent;

        public class CustomerLookupProvider implements Lookup.Provider {

            private Lookup lookup;
            private InstanceContent instanceContent;

            public CustomerLookupProvider() {
                // Create an InstanceContent to hold capabilities...
                instanceContent = new InstanceContent();
                // Create an AbstractLookup to expose the InstanceContent...
                lookup = new AbstractLookup(instanceContent);
                // Add a "Read" capability to the Lookup of the provider:
                //...to come...
                // Add an "Update" capability to the Lookup of the provider:
                //...to come...
                // Add a "Create" capability to the Lookup of the provider:
                //...to come...
                // Add a "Delete" capability to the Lookup of the provider:
                //...to come...
            }

            @Override
            public Lookup getLookup() {
                return lookup;
            }

        }

    3. Point out the fact that, in the same way as we can publish an object into the Lookup of a TopComponent, we can now also publish an object into the Lookup of our CustomerLookupProvider. Instead of publishing a String, as in the NetBeans Platform Quick Start, we'll publish an instance of our own type. Here is the type:

        public interface ReadCapability {
            public void read() throws Exception;
        }

    And here is an implementation of our type added to our Lookup:

        public class CustomerLookupProvider implements Lookup.Provider {

            private Set<Customer> customerSet;
            private Lookup lookup;
            private InstanceContent instanceContent;

            public CustomerLookupProvider() {
                customerSet = new HashSet<Customer>();
                // Create an InstanceContent to hold capabilities...
                instanceContent = new InstanceContent();
                // Create an AbstractLookup to expose the InstanceContent...
                lookup = new AbstractLookup(instanceContent);
                // Add a "Read" capability to the Lookup of the provider:
                instanceContent.add(new ReadCapability() {
                    @Override
                    public void read() throws Exception {
                        ProgressHandle handle = ProgressHandleFactory.createHandle("Loading...");
                        handle.start();
                        customerSet.addAll(new CustomerDataAccess().getAllCustomers());
                        handle.finish();
                    }
                });
                // Add an "Update" capability to the Lookup of the provider:
                //...to come...
                // Add a "Create" capability to the Lookup of the provider:
                //...to come...
                // Add a "Delete" capability to the Lookup of the provider:
                //...to come...
            }

            @Override
            public Lookup getLookup() {
                return lookup;
            }

            public Set<Customer> getCustomers() {
                return customerSet;
            }

        }

    4. Point out that we can now create a new instance of our Lookup (in some other module, so long as it has a dependency on the module providing the CustomerLookupProvider and the ReadCapability), retrieve the ReadCapability, and then do something with the customers that are returned, here in the rewritten constructor of the TopComponent, without needing to know anything about how the database access is actually achieved, since that is hidden in the implementation of our type, above:

        public CustomerViewerTopComponent() {
            initComponents();
            setName(Bundle.CTL_CustomerViewerTopComponent());
            setToolTipText(Bundle.HINT_CustomerViewerTopComponent());
            // EntityManager entityManager = Persistence.createEntityManagerFactory("CustomerLibraryPU").createEntityManager();
            // Query query = entityManager.createNamedQuery("Customer.findAll");
            // List<Customer> resultList = query.getResultList();
            // for (Customer c : resultList) {
            //     jTextArea1.append(c.getName() + " (" + c.getCity() + ")" + "\n");
            // }
            CustomerLookupProvider lookup = new CustomerLookupProvider();
            ReadCapability rc = lookup.getLookup().lookup(ReadCapability.class);
            try {
                rc.read();
                for (Customer c : lookup.getCustomers()) {
                    jTextArea1.append(c.getName() + " (" + c.getCity() + ")" + "\n");
                }
            } catch (Exception ex) {
                Exceptions.printStackTrace(ex);
            }
        }

    Does the above make as much sense to others as it does to me, including the naming of the classes? Feedback would be appreciated! Then I'll integrate it into the tutorial and do the same for the other sections, i.e., "Create", "Update", and "Delete". (By the way, the tutorial ends up showing that, rather than using a JTextArea to display data, you can use Nodes and explorer views to do so.)
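    For what it's worth, if the remaining //...to come... slots are meant to follow the same shape, the "Update" capability might look something like the sketch below. The UpdateCapability type and the CustomerDataAccess.updateCustomer() method are names assumed for illustration; they aren't fixed by the tutorial yet:

        // Assumed continuation of the pattern above; UpdateCapability and
        // CustomerDataAccess.updateCustomer(Customer) are hypothetical names.
        public interface UpdateCapability {
            public void update(Customer customer) throws Exception;
        }

        // In the CustomerLookupProvider constructor, replacing the "Update" placeholder:
        instanceContent.add(new UpdateCapability() {
            @Override
            public void update(Customer customer) throws Exception {
                ProgressHandle handle = ProgressHandleFactory.createHandle("Saving...");
                handle.start();
                new CustomerDataAccess().updateCustomer(customer); // hypothetical DAO method
                handle.finish();
            }
        });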


  • cakephp and SQL_CALC_FOUND_ROWS

    - by Lizard
    I am trying to add SQL_CALC_FOUND_ROWS into a query (please note this isn't for pagination) within CakePHP. The code I currently have is below:

        return $this->find('all', array(
            'conditions' => $conditions,
            'fields' => array('SQL_CALC_FOUND_ROWS', 'Category.*', 'COUNT(`Entity`.`id`) as `entity_count`'),
            'joins' => array('LEFT JOIN `entities` AS Entity ON `Entity`.`category_id` = `Category`.`id`'),
            'group' => '`Category`.`id`',
            'order' => $sort,
            'limit' => $params['limit'],
            'offset' => $params['start'],
            'contain' => array('Domain' => array('fields' => array('title')))
        ));

    Note the 'fields' => array('SQL_CALC_FOUND_ROWS', ... entry: this obviously doesn't work, as CakePHP tries to apply SQL_CALC_FOUND_ROWS to the table, e.g. SELECT `Category`.`SQL_CALC_FOUND_ROWS`. Is there any way of doing this? Any help would be greatly appreciated, thanks.
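    One workaround seen in the wild, sketched here untested: fold SQL_CALC_FOUND_ROWS into the first real column so CakePHP passes the expression through without prefixing it with the model alias, then read the count back with a second raw query. Whether your CakePHP version leaves the combined field entry unquoted is an assumption worth verifying:

        // Sketch only: SQL_CALC_FOUND_ROWS rides along on the first field entry
        // (assumed to be emitted as-is), and FOUND_ROWS() is read immediately after.
        $results = $this->find('all', array(
            'conditions' => $conditions,
            'fields' => array(
                'SQL_CALC_FOUND_ROWS `Category`.`id`',
                'Category.*',
                'COUNT(`Entity`.`id`) as `entity_count`'
            ),
            'joins' => array('LEFT JOIN `entities` AS Entity ON `Entity`.`category_id` = `Category`.`id`'),
            'group' => '`Category`.`id`',
            'order' => $sort,
            'limit' => $params['limit'],
            'offset' => $params['start'],
            'contain' => array('Domain' => array('fields' => array('title')))
        ));
        $found = $this->query('SELECT FOUND_ROWS() AS `total`;');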

