Search Results

Search found 13974 results on 559 pages for 'include'.


  • JasperReports: is it possible to use multiple data sources, or if not, to use collections in parameters?

    - by Knut Arne Vedaa
    It seems that the reporting idiom is that a report consists of a single list of items, plus some additional data (parameters). Are there ways to include several unrelated lists in a report, or would this go against the idiom to such an extent that a different tool should be used to generate the output? Suppose, for instance, you have a list of Persons that live in a Building, with names, phone numbers and so on. This list would be the main data source. Additionally, on the same report you want to show various other information about that Building, such as its address, number of floors and so on. The number of items in this information might vary between Buildings, so you cannot simply put it into static parameters; you would need a map or a list. This is of course a contrived example, but it should serve to illustrate the problem. In short: can you use several unrelated lists in a report?
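
    For what it's worth, the usual JasperReports approach is to keep one list as the main data source and pass each additional, unrelated list through the parameters map as its own JRBeanCollectionDataSource, which a subreport or list component then consumes via a parameter expression. A minimal sketch, assuming hypothetical bean lists and a report design that declares a parameter named BuildingFacts of class net.sf.jasperreports.engine.JRDataSource:

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        import net.sf.jasperreports.engine.JRException;
        import net.sf.jasperreports.engine.JasperFillManager;
        import net.sf.jasperreports.engine.JasperPrint;
        import net.sf.jasperreports.engine.data.JRBeanCollectionDataSource;

        public class BuildingReport {
            public static JasperPrint fill(List<?> persons, List<?> buildingFacts)
                    throws JRException {
                Map<String, Object> params = new HashMap<String, Object>();
                // The unrelated second list travels as a parameter; the report
                // design hands it to a subreport or list component via
                // $P{BuildingFacts} in its dataSourceExpression.
                params.put("BuildingFacts", new JRBeanCollectionDataSource(buildingFacts));

                // The list of Persons stays the report's primary data source.
                return JasperFillManager.fillReport("building_report.jasper", params,
                        new JRBeanCollectionDataSource(persons));
            }
        }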

    Read the article

  • Copy task in Visual Studio 2008

    - by Maurizio Reginelli
    This is a question related to a previous question I posted (see here). I need to configure a .csproj file to copy some files from one directory to another (let me call them SOURCE and DESTINATION). I also need to change the SOURCE path depending on the configuration I'm using. For example, if I compile my project in Debug mode, the SOURCE path must contain a subfolder called Debug. I tried the solution proposed by Schmitt in the previous post, using $(ConfigurationName) to set the dynamic directory in the SOURCE path. When I opened the solution containing that project, a list of links to the source files appeared in the main tree of the project, and they were correctly related to Debug mode. But when I changed to Release mode, I saw that the paths of the linked source files were still set to the Debug version. Is there a way to specify a parameter in the Include attribute of the SourceFiles element? Thank you.

    Read the article

  • Eclipse CDT: Import source / header files into my new project, without duplicating them

    - by Tom
    Hi all, I'm sure there is a very simple solution for this. I have a bunch of .cpp / .h files from a project, say in directory ~/files. On the other hand, I want to create a C++ project using Eclipse to work on those files, so I put my workspace in ~/wherever. Then I create a C++ project, ~/wherever/project, and include the source files (located in ~/files). The problem I'm having is that the files are now duplicated in ~/wherever/project, and I would like to avoid that, especially so I know which copy of the file to commit. Is this possible? I'm sure it is, but I can't get it. Thanks in advance.

    Read the article

  • What's the best way to create a Windows Mobile application with multiple screens in C#

    - by Joseph Earl
    I am creating a Windows Mobile application in C# and Visual Studio 2008. The application will have 5-6 main 'screens'. There will also be a bar (/area) with information (e.g. a title, whether the app is busy, etc.) above the screens, and a toolbar (or similar control) below the screens with 5-6 buttons (with images) to change the active screen (i.e. the screens will share the top bar and toolbar). What is the best way to implement this?
    - Use multiple forms, and just include the toolbar and top bar in each
    - Use a single form and something like the Tab control (but customised) to contain the screens
    - Something else?
    Keeping in mind a) memory usage and b) time to switch screens. Thanks in advance. Any links, pointers etc. are much appreciated.

    Read the article

  • What causes a JRE 6 JVM code cache leak?

    - by Arturo Knight
    Since switching to JRE 6, my server's code cache usage (non-heap) keeps growing indefinitely. My application creates a lot of classes at runtime, BUT these classes are successfully unloaded during the GC process. I can see these classes getting unloaded in the GC logs, and the permGen usage stays constant. I specifically make sure in my code that these classes are orphaned once I am finished with them, so they correctly get garbage collected from permGen. The code cache, however, keeps growing. I only became aware of the code cache after switching to JRE 6. So I guess my questions are:
    - Does GC include the code cache?
    - What, specifically, could cause a code cache memory leak?
    - Is there a bug in JDK 6 in this area?
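
    For what it's worth, the code cache is exposed as a non-heap memory pool via JMX, so its growth can be watched from inside the JVM while the leak is reproduced. A small sketch; the pool name "Code Cache" is what HotSpot reports, but pool names are JVM-specific:

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryPoolMXBean;

        public class CodeCacheWatch {
            public static void main(String[] args) {
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    // On HotSpot, the non-heap pool holding JIT-compiled code
                    // is reported as "Code Cache".
                    if ("Code Cache".equals(pool.getName())) {
                        System.out.printf("code cache: used=%d max=%d%n",
                                pool.getUsage().getUsed(), pool.getUsage().getMax());
                    }
                }
            }
        }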

    Read the article

  • Distributing APNS providers

    - by Sam
    I'm writing a business-focused iPhone app which includes a self-hosted server component. I'd like to include push notification functionality in the server; reading through the programming guide, it looks as if this would involve one of the following:
    - Distributing the provider certificate with the server component. This doesn't sound like a terribly good idea (even if Apple permits it?).
    - Hosting a shared notification provider and forwarding notifications to APNS from the servers. For an ongoing, high-availability service, this is likely to require a subscription pricing component, which I would prefer to avoid.
    - Requiring customers to apply for their own provider certificate. However, it's not clear whether multiple organisations are allowed to apply for provider certificates with a single bundle ID, and it would significantly increase the barrier to adoption.
    APNS looks to me as if it's specifically geared for centrally hosted services. Is anyone distributing self-hosted notification providers? Are there any other options?

    Read the article

  • Ajax Tab control in master page

    - by Senthilkumar
    Hi All, I am using the AJAX tab container in a master page with more than 4 tab panels, but when I include an asp:ContentPlaceHolder in the ContentTemplate of each tab panel, I am not able to see the second or third page (.aspx) when I click on its tab. I searched the net but didn't get any answers. Please help me. Only the first page (.aspx) included in a content placeholder gets loaded; the second and subsequent pages in the tab panels do not.

    Read the article

  • XStream <-> Alternative binary formats (e.g. protocol buffers)

    - by sehugg
    We currently use XStream for encoding our web service inputs/outputs in XML. However, we are considering switching to a binary format with code generators for multiple languages (protobuf, Thrift, Hessian, etc.) to make supporting new clients easier and less reliant on hand-coding (and also to better support our message formats, which include binary data). However, most of our objects on the server are POJOs, with XStream handling the serialization via reflection and annotations, and most of these libraries assume they will be generating the POJOs themselves. I can think of a few ways to interface an alternative library:
    - Write an XStream marshaller for the target format.
    - Write custom code to marshal the POJOs to/from the classes generated by the alternative library.
    - Subclass the generated classes to implement the POJO logic. May require some rewriting. (Also, did I mention we want to use Terracotta?)
    - Use another library that supports both reflection (like XStream) and code generation.
    However, I'm not sure which serialization library would be best suited to the above techniques.
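
    As an illustration of the second option, the hand-written bridge usually amounts to a pair of conversion methods per type. A sketch, assuming a hypothetical Person POJO and a hypothetical protoc-generated PersonProto.Person message; the newBuilder()/build() calls are the standard protobuf-java pattern:

        // Bridge between a POJO and a protoc-generated message class.
        // PersonProto.Person is hypothetical generated code.
        public final class PersonMarshaller {

            // Hypothetical POJO of the kind XStream serializes today.
            public static class Person {
                private String name;
                private int age;
                public String getName() { return name; }
                public void setName(String name) { this.name = name; }
                public int getAge() { return age; }
                public void setAge(int age) { this.age = age; }
            }

            private PersonMarshaller() {}

            // POJO -> generated protobuf message.
            public static PersonProto.Person toProto(Person pojo) {
                return PersonProto.Person.newBuilder()
                        .setName(pojo.getName())
                        .setAge(pojo.getAge())
                        .build();
            }

            // Generated protobuf message -> POJO.
            public static Person fromProto(PersonProto.Person msg) {
                Person pojo = new Person();
                pojo.setName(msg.getName());
                pojo.setAge(msg.getAge());
                return pojo;
            }
        }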

    Read the article

  • SOM maps problem in MATLAB

    - by Serdar Demir
    I have a text file that includes data. My text file:

        young, myopic, no, reduced, no
        young, myopic, no, normal, soft
        young, myopic, yes, reduced, no
        young, myopic, yes, normal, hard
        young, hyperopia, no, reduced, no
        young, hyperopia, no, normal, soft
        young, hyperopia, yes, reduced, no
        young, hyperopia, yes, normal, hard

    I read my text file with the load method:

        % young=1
        % myopic=2
        % no=3
        % etc.
        load iris.txt
        net = newsom(1,[1 5]);
        [net,tr] = train(net,1);
        plotsomplanes(net);

    Error code:

        ??? Undefined function or method 'plotsomplanes' for input arguments of type 'network'.

    Read the article

  • How to search an XML when parsing it using SAX in nokogiri

    - by ralph
    I have a simple but huge XML file like the one below. I want to parse it using SAX and only print out the text between the title tags.

        <root>
          <site>some site</site>
          <title>good title</title>
        </root>

    I have the following code:

        require 'rubygems'
        require 'nokogiri'
        include Nokogiri

        class PostCallbacks < XML::SAX::Document
          def start_element(element, attributes)
            if element == 'title'
              puts "found title"
            end
          end

          def characters(text)
            puts text
          end
        end

        parser = XML::SAX::Parser.new(PostCallbacks.new)
        parser.parse_file("myfile.xml")

    The problem is that it prints the text between all the tags. How can I print only the text between the title tag?

    Read the article

  • Equivalents to Z80 DJNZ instruction on other architectures?

    - by Justin Ethier
    First a little background. The Z80 CPU has an instruction called DJNZ which can be used in a similar manner to a for loop. Basically, DJNZ decrements the B register and jumps to a label if it is not zero. For example:

            ld b,96                      ; erase all of the line
        disp_version_erase_loop:
            call _vputblank              ; erase pixels at cursor (uses b reg)
            djnz disp_version_erase_loop ; loop

    Of course you can do the same thing using regular comparison and jump instructions, but often it is handy to use the single instruction. With that out of the way, my question is: do other CPU architectures include a similar control instruction?

    Read the article

  • Why callback functions need to be static when declared in a class

    - by Dave17
    I was trying to declare a callback function in a class, and then somewhere I read that the function needs to be static, but it didn't explain why.

        #include <iostream>
        using std::cout;
        using std::endl;

        class Test
        {
        public:
            Test() {}

            void my_func(void (*f)())
            {
                cout << "In My Function" << endl;
                f();  // Invoke callback function
            }

            static void callback_func() { cout << "In Callback function" << endl; }
        };

        int main()
        {
            Test Obj;
            Obj.my_func(Test::callback_func);
        }

    Read the article

  • Overview of SOA Diagnostics in 11.1.1.6

    - by ShawnBailey
    What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g, but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing:

    - RDA: Remote Diagnostic Agent
    - DFW: Diagnostic Framework
    - Selective Tracing
    - DMS: Dynamic Monitoring Service
    - ODL: Oracle Diagnostic Logging
    - ADR: Automatic Diagnostics Repository
    - ADRCI: Automatic Diagnostics Repository Command Interpreter
    - WLDF: WebLogic Diagnostic Framework

    This overview is not meant to be a comprehensive guide on using all of these tools; however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable to Fusion Middleware as a whole, but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post, but you can find general information here. There are more specific resources in the sections below. In this post, when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control.

    RDA (Remote Diagnostic Agent)

    RDA is a standalone tool that is used to collect both static configuration and dynamic runtime information from the SOA environment. RDA is generally run manually from the command line against a domain or single server. When opening a new Service Request, including an RDA collection can dramatically decrease the back and forth required to collect logs and configuration information for Support. After installing RDA you configure it to use the SOA Suite module as described in the referenced resources. The SOA module includes the Oracle WebLogic Server (WLS) module by default in order to include all of the relevant information for the environment. In addition to this basic configuration there is also an advanced mode where you can set the number of thread dumps for the collections, log files, Incidents, etc.

    When would you use it? When creating a Service Request or otherwise working with Oracle resources on an issue, capturing environment snapshots to baseline your configuration, or diagnosing an issue on your own.

    How is it related to the other tools? RDA is related to DFW in that it collects the last 10 Incidents from the server by default. In a similar manner, RDA is related to ODL through its collection of the diagnostic logs, and these may contain information from Selective Tracing sessions.

    Examples of what it currently collects (for details please see the links in the Resources section):

    - Diagnostic Logs (ODL)
    - Diagnostic Framework Incidents (DFW)
    - SOA MDS Deployment Descriptors
    - SOA Repository Summary Statistics
    - Thread Dumps
    - Complete Domain Configuration

    RDA Resources:

    - Webcast Recording: Using RDA with Oracle SOA Suite 11g
    - Blog Post: Diagnose SOA Suite 11g Issues Using RDA
    - Download RDA
    - How to Collect Analysis Information Using RDA for Oracle SOA Suite 11g Products [ID 1350313.1]
    - How to Collect Analysis Information Using RDA for Oracle SOA Suite and BPEL Process Manager 11g [ID 1352181.1]
    - Getting Started With Remote Diagnostic Agent: Case Study - Oracle WebLogic Server (Video) [ID 1262157.1]

    DFW (Diagnostic Framework)

    DFW provides the ability to collect specific information for a particular problem when that problem occurs. DFW is included with your SOA Suite installation and deployed to the domain. Let's define the components of DFW:

    - Diagnostic Dumps: specific diagnostic collections that are defined at either the 'system' or product level. Examples would be diagnostic logs or thread dumps.
    - Incident: a collection of Diagnostic Dumps associated with a particular problem.
    - Log Conditions: an Oracle Diagnostic Logging event that DFW is configured to listen for. If the event is identified, an Incident will be created.
    - WLDF Watch: the WebLogic Diagnostic Framework or 'WLDF' is not a component of DFW; however, it can be a source of DFW Incident creation through the use of a 'Watch'.
    - WLDF Notification: a Notification is a component of WLDF and is the link between the Watch and DFW. You can configure multiple Notification types in WLDF and associate them with your Watches. 'FMWDFW-notification' is available out of the box to allow for DFW notification of Watch execution.
    - Rule: defines a WLDF Watch or Log Condition with which we want to associate a set of Diagnostic Dumps. When triggered, the specified dumps will be collected and added to the Incident.
    - Rule Action: defines the specific Diagnostic Dumps to collect for a particular rule.
    - ADR: Automatic Diagnostics Repository; defined for every server in a domain. This is where Incidents are stored.

    Now let's walk through a simple flow:

    1. Oracle Web Services error message OWS-04086 (SOAP fault) is generated on managed server 1.
    2. The DFW Log Condition for OWS-04086 evaluates to TRUE.
    3. DFW creates a new Incident in the ADR for managed server 1.
    4. DFW executes the specified Diagnostic Dumps and adds the output to the Incident. In this case we'll grab the diagnostic log and thread dump. We might also want to collect the WSDL binding information and SOA audit trail.

    When would you use it? When you want to automatically collect Diagnostic Dumps at a particular time using a trigger, or when you want to collect the information manually. In either case it can be readily uploaded to Oracle Support through the Service Request.

    How is it related to the other tools? DFW generates Incidents, which are collections of Diagnostic Dumps. One of the system-level Diagnostic Dumps collects the current server diagnostic log, which is generated by ODL and can contain information from Selective Tracing sessions. Incidents are included in RDA collections by default, and ADRCI is a tool that is used to package an Incident for upload to Oracle Support. In addition, both ODL and DMS can be used to trigger Incident creation through DFW. The conditions and rules for generating Incidents can become quite complicated; the resources below go into more detail.

    A simpler approach to leveraging at least the Diagnostic Dumps is through WLST (WebLogic Scripting Tool), where there are commands to do the following:

    - Create an Incident
    - Execute a single Diagnostic Dump
    - Describe a Diagnostic Dump
    - List the available Diagnostic Dumps

    The WLST option offers greater control over what is generated and when. It can be a great help when collecting information for Support. There are overlaps with RDA; however, DFW is geared towards collecting specific runtime information when an issue occurs, while existing Incidents are collected by RDA.

    There are 3 WLDF Watches configured by default in a SOA Suite 11g domain: Stuck Threads, Unchecked Exception and Deadlock. These Watches are enabled by default and will generate Incidents in ADR. They are configured to reset automatically after 30 seconds, so they have the potential to create multiple Incidents if these conditions persist. The Incidents generated by these Watches will only contain system-level Diagnostic Dumps. These same system-level Diagnostic Dumps will be included in any application-scoped Incident as well.

    Starting in 11.1.1.6, SOA Suite includes its own set of application-scoped Diagnostic Dumps that can be executed from WLST or through a WLDF Watch or Log Condition. These Diagnostic Dumps can be added to an Incident, such as in the earlier example using the error code OWS-04086:

    - soa.config: MDS configuration files and deployed-composites.xml
    - soa.composite: all artifacts related to the deployed composite
    - soa.wsdl: summary of endpoints configured for the composite
    - soa.edn: EDN configuration summary, if applicable
    - soa.db: summary DB information for the SOA repository
    - soa.env: Coherence cluster configuration summary
    - soa.composite.trail: partial audit trail information for the running composite

    The current release of RDA has the option to collect the soa.wsdl and soa.composite Diagnostic Dumps. More Diagnostic Dumps for SOA Suite products are planned for future releases, along with enhancements to DFW itself.

    DFW Resources:

    - Webcast Recording: SOA Diagnostics Sessions: Diagnostic Framework
    - Diagnostic Framework Documentation
    - DFW WLST Command Reference
    - Documentation for SOA Diagnostic Dumps in 11.1.1.6

    Selective Tracing

    Selective Tracing is a facility, available starting in version 11.1.1.4, that allows you to increase the logging level for specific loggers and for a specific context. What this means is that you have a greater capability to collect needed diagnostic log information in a production environment with reduced overhead. For example, a Selective Tracing session can be executed that increases the log level for only one composite and only one logger, limited to one server in the cluster and for a preset period of time. In an environment where dozens of composites are deployed, this can dramatically reduce the volume and overhead of the logging without sacrificing relevance. Selective Tracing can be administered either from Enterprise Manager or through WLST. WLST provides a bit more flexibility in terms of exactly where the tracing is run.

    When would you use it? When there is an issue in production or another environment that lends itself to filtering by an available context criterion, and increasing the log level globally would result in too much overhead or irrelevant information. The information is written to the server diagnostic log and is exportable from Enterprise Manager.

    How is it related to the other tools? Selective Tracing output is written to the server diagnostic log. This log can be collected by a system-level Diagnostic Dump using DFW or through a default RDA collection. Selective Tracing also heavily leverages ODL fields to determine what to trace and to tag information that is part of a particular tracing session.

    Available context criteria:

    - Application Name
    - Client Address
    - Client Host
    - Composite Name
    - User Name
    - Web Service Name
    - Web Service Port

    Selective Tracing Resources:

    - Webcast Recording: SOA Diagnostics Session: Using Selective Tracing to Diagnose SOA Suite Issues
    - How to Use Selective Tracing for SOA [ID 1367174.1]
    - Selective Tracing WLST Reference

    DMS (Dynamic Monitoring Service)

    DMS exposes runtime information for monitoring. This information can be monitored in two ways: through the DMS servlet, or as exposed MBeans.

    The servlet is deployed by default and can be accessed through http://<host>:<port>/dms/Spy (use administrative credentials to access). The landing page of the servlet shows identical columns of what are known as Noun Types. If you select a Noun Type you will see a table in the right frame that shows the attributes (Sensors) for the Noun Type and the available instances. SOA Suite has several exposed Noun Types that are available for viewing through the Spy servlet. Screenshots of the Spy servlet are available in the Knowledge Base article How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS).

    Every Noun instance in the runtime is exposed as an MBean instance. As such, they are generally available through an MBean browser and available for monitoring through WLDF. You can configure a WLDF Watch to monitor a particular attribute and fire a notification when the threshold is exceeded. A WLDF Watch can use the out-of-the-box DFW notification type to notify DFW to create an Incident.

    When would you use it? When you want to monitor a metric or set of metrics, either manually or through an automated system, or when you want to trigger a WLDF Watch based on a metric exposed through DMS.

    How is it related to the other tools? DMS metrics can be monitored with WLDF Watches, which can in turn notify DFW to create an Incident.

    DMS Resources:

    - How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1]
    - How to Reset a SOA 11g DMS Metric
    - DMS Documentation

    ODL (Oracle Diagnostic Logging)

    ODL is the primary facility most Fusion Middleware applications use to log what they are doing. Whenever you change a logging level through Enterprise Manager, it is ultimately exposed through ODL and written to the server diagnostic log. A notable exception is WebLogic Server, which uses its own log format and file. ODL logs entries in a consistent, structured way using predefined fields and name/value pairs. Here's an example of a SOA Suite entry:

        [2012-04-25T12:49:28.083-06:00] [AdminServer] [ERROR] [] [oracle.soa.bpel.engine] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [ecid: 0963fdde7e77631c:-31a6431d:136eaa46cda:-8000-00000000000000b4,0] [errid: 41] [WEBSERVICE_PORT.name: BPELProcess2_pt] [APP: soa-infra] [composite_name: TestProject2] [J2EE_MODULE.name: fabric] [WEBSERVICE.name: bpelprocess1_client_ep] [J2EE_APP.name: soa-infra] Error occured while handling a post operation[[

    When would you use it? You'll use ODL almost every time you want to identify and diagnose a problem in the environment. The entries are written to the server diagnostic log.

    How is it related to the other tools? The server diagnostic logs are collected by DFW and RDA. Selective Tracing writes its information to the diagnostic log as well. Additionally, DFW Log Conditions are triggered by ODL log events.

    ODL Resources:

    - ODL Documentation

    ADR (Automatic Diagnostics Repository)

    ADR is not a tool in and of itself, but is where DFW stores the Incidents it creates. Every server in the domain has an ADR location, which can be found under <SERVER_HOME>/adr. This is referred to as the ADR 'Base' location. ADR also has what are known as 'Home' locations.

    Example: you have a domain called 'myDomain' and an associated managed server called 'myServer'. Your admin server is called 'AdminServer'. Your domain home directory is called 'myDomain' and it contains a 'servers' directory. The 'servers' directory contains a directory for the managed server called 'myServer', and here is where you'll find the 'adr' directory, which is the ADR 'Base' location for myServer. To get to the ADR 'Home' locations we drill down a few levels: diag/ofm/myDomain/. In an 11.1.1.6 SOA Suite domain you will see 2 directories here, 'myServer' and 'soa-infra'. These are the ADR 'Home' locations. 'myServer' is the 'system' ADR home and contains system-level Incidents. 'soa-infra' is the name that SOA Suite used to register with DFW, and this ADR home contains SOA Suite related Incidents. Each ADR home location contains a series of directories, one of which is called 'incident'. This is where your Incidents are stored.

    When would you use it? It's a good idea to check these locations from time to time to see whether a lot of Incidents are being generated. They can be cleaned out by deleting the Incident directories or through the ADRCI tool. If you know that an Incident is of particular interest for an issue you're working on with Oracle, you can simply zip it up and provide it.

    How does it relate to the other tools? ADR is obviously very important for DFW, since it's where the Incidents are stored. Incidents contain Diagnostic Dumps that may relate to diagnostic logs (ODL) and DMS metrics. The most recent 10 Incident directories are collected by RDA by default, and ADRCI relies on the ADR locations to help manage their contents.

    ADRCI (Automatic Diagnostics Repository Command Interpreter)

    ADRCI is a command-line tool for packaging and managing Incidents.

    When would you use it? When purging Incidents from an ADR Home location, or when you want to package an Incident along with an offline RDA collection for upload to Oracle Support.

    How does it relate to the other tools? ADRCI contains a tool called the Incident Packaging System, or IPS. This is used to package an Incident for upload to Oracle Support through a Service Request. Starting in 11.1.1.6, IPS will attempt to collect an offline RDA collection and include it with the Incident package. This will only work if Perl is available on the path; otherwise it will give a warning and package only the Incident files.

    ADRCI Resources:

    - How to Use the Incident Packaging System (IPS) in SOA 11g [ID 1381259.1]
    - ADRCI Documentation

    WLDF (WebLogic Diagnostic Framework)

    WLDF is functionality available in WebLogic Server since version 9. Starting with FMW 11g, a link has been added between WLDF and the pre-existing DFW: the WLDF Watch Notification. Let's take a closer look at the flow:

    1. There is a need to monitor the performance of your SOA Suite message processing.
    2. A WLDF Watch is created in the WLS console that will trigger if the average message processing time exceeds 2 seconds. This metric is monitored through a DMS MBean instance.
    3. The out-of-the-box DFW Notification (called FMWDFW-notification) is added to the Watch. Under the covers, this notification is of type JMX.
    4. The Watch is triggered when the threshold is exceeded and fires the Notification.
    5. DFW has a listener that picks up the Notification and evaluates it according to its rules.

    When it comes to automatic Incident creation, WLDF is a key component, with capabilities that will grow over time.

    When would you use it? When you want to monitor the WLS server log or an MBean metric for some condition and fire a notification when the Watch is triggered.

    How does it relate to the other tools? WLDF is used to automatically trigger Incident creation through DFW using the DFW Notification.

    WLDF Resources:

    - How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1]
    - How To Script the Creation of a SOA WLDF Watch in 11g [ID 1377986.1]
    - WLDF Documentation

    Read the article

  • "this device cannot start error code 10" ?

    - by Neo
    I am trying to install a GPS driver for the GMM-U1LP, which I think is just a virtual COM port driver. It just contains the following INF file:

        [Version]
        Signature="$Windows NT$"
        Class=Ports
        ClassGuid={4D36E978-E325-11CE-BFC1-08002BE10318}
        Provider=%MTK%
        ;LayoutFile=layout.inf
        DriverVer=06/12/2007,1.0.0.1

        [Manufacturer]
        %MTK%=MTK

        [MTK]
        %MTK3329%=Reader,USB\Vid_0e8d&Pid_3329

        [Reader_Install.NTx86]
        ;Windows2000

        [DestinationDirs]
        DefaultDestDir=12
        Reader.NT.Copy=12

        [Reader.NT]
        Include=mdmcpq.inf
        CopyFiles=FakeModemCopyFileSection
        AddReg=Reader.NT.AddReg

        [Reader.NT.AddReg]
        HKR,,DevLoader,,*ntkern
        HKR,,NTMPDriver,,usbser.sys
        HKR,,EnumPropPages32,,"MsPorts.dll,SerialPortPropPageProvider"

        [Reader.NT.Services]
        AddService = usbser, 0x00000002, Service_Inst

        [Service_Inst]
        DisplayName = %Serial.SvcDesc%
        ServiceType = 1   ; SERVICE_KERNEL_DRIVER
        StartType = 3     ; SERVICE_DEMAND_START
        ErrorControl = 1  ; SERVICE_ERROR_NORMAL
        ServiceBinary = %12%\usbser.sys
        LoadOrderGroup = Base

        [Strings]
        MTK = "Media Tek Inc."
        MTK3329 = "GPS USB Serial Interface Driver"
        Serial.SvcDesc = "GPS USB Serial Interface Driver"

    After installing this INF, the device details show "This device cannot start (error code 10)". What is the exact problem? Do I need to test this after connecting the device?

    Read the article

  • Sunspot / Solr full text search - how to index Rails associations

    - by Sam
    Is it possible to index through an association with Sunspot? For example, if a Customer has_many Contacts, I want a 'searchable' block on my Customer model that indexes the Contact#first_name and Contact#last_name columns for use in searches on Customer. acts_as_solr has an :include option for this. I've simply been combining the associated column names into a text field on Customer, as shown below, but this doesn't seem very flexible.

        searchable do
          text :organization_name, :default_boost => 2
          text :billing_address1, :default_boost => 2
          text :contact_names do
            contacts.map { |contact| contact.to_s }
          end
        end

    Any suggestions?

    Read the article

  • Why does Visual Studio 2008 change namespace characters in a generated file?

    - by Fredrik
    Hello, I am working on an ASP.NET 3.5 project in Sweden, where some of the namespaces include Swedish characters such as 'å', 'ä' and 'ö'. When building the project and generating the designer file, Visual Studio replaces these characters with other, strange characters. This only happens when the characters occur in a namespace or class name; if a field or variable contains a Swedish character, everything works fine. To clarify, the strange characters occur in the designer file when a namespace and/or control contains Swedish characters. Does anyone know why this happens, and whether there is a solution to the problem that doesn't mean changing the names of the namespaces? Sincerely, Fredrik

    Read the article

  • JNA and ZBar (library for bar code reader)

    - by user219120
    I'm creating a Java interface with JNA for ZBar (a library for bar code readers). In JNA, the C structures need to be declared. For example:

        // In C
        typedef struct {
            char* id;
            char* name;
            int age;
            char* sectionId;
        } EMPLOYEE;

    becomes

        // In Java with JNA
        public static class Employee extends Structure { // com.sun.jna.Structure
            public String id;
            public String name;
            public int age;
            public String sectionId;
        }

    But in ZBar, some structures have no members. For example:

        // zbar-0.10/include/zbar.h, lines 1009-1011
        struct zbar_image_scanner_s;    /** opaque image scanner object. */
        typedef struct zbar_image_scanner_s zbar_image_scanner_t;

    That doesn't declare the size or members of the structure. How can I write interfaces for these structures in JNA?
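
    For what it's worth, opaque structs like zbar_image_scanner_s are normally mapped in JNA not as Structure subclasses but as typed pointers, because only the native library ever allocates or looks inside them. A sketch of that approach; zbar_image_scanner_create and zbar_image_scanner_destroy are real functions from zbar.h, while the Java names are illustrative:

        import com.sun.jna.Library;
        import com.sun.jna.Native;
        import com.sun.jna.PointerType;

        public class ZBarOpaque {
            // Opaque handle: a typed pointer with no declared fields.
            public static class ZBarImageScanner extends PointerType {}

            public interface ZBarLibrary extends Library {
                ZBarLibrary INSTANCE =
                        (ZBarLibrary) Native.loadLibrary("zbar", ZBarLibrary.class);

                // zbar_image_scanner_t *zbar_image_scanner_create(void);
                ZBarImageScanner zbar_image_scanner_create();

                // void zbar_image_scanner_destroy(zbar_image_scanner_t *scanner);
                void zbar_image_scanner_destroy(ZBarImageScanner scanner);
            }

            public static void main(String[] args) {
                ZBarImageScanner scanner = ZBarLibrary.INSTANCE.zbar_image_scanner_create();
                // ... pass the handle to other zbar_* calls ...
                ZBarLibrary.INSTANCE.zbar_image_scanner_destroy(scanner);
            }
        }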

    Read the article

  • Is it a good object-oriented-design practice to send a pointer to private data to another class?

    - by Denis
    Hello everyone, there is a well-known recommendation not to include in a class interface a method that returns a pointer (or a reference) to the private data of the class. But what do you think about a public method of a class that sends a pointer to its private data to another class? For example:

        class A
        {
        public:
            void fA(void) { _b.fB(&_var); }
        private:
            B _b;
            int _var;
        };

    I think that it is some sort of data-hiding damage: the private data define the state of their own class, so why should one class delegate changes of its own state to another one? What do you think? Denis

    Read the article

  • How to make Qt support HTML 5 database?

    - by Mickey Shine
    I am using Qt 4.7.1 and embedded a webview in my app. But I got the following error when trying to visit http://webkit.org/demos/sticky-notes/ to test the HTML 5 database feature:

        Failed to open the database on disk. This is probably because the version was bad or there is not enough space left in this domain's quota

    I compiled my static Qt library with the following command:

        configure --prefix=/usr/local/qt-static-release-db --accessibility --multimedia --audio-backend --svg --webkit --javascript-jit --script --scripttools --declarative --release -nomake examples -nomake demos --static --openssl -I /usr/local/ssl/include -L /usr/local/ssl/lib -confirm-license -sql-qsqlite -sql-qmysql -sql-qodbc

    Read the article

  • JavaScript Same Origin Policy - How does it apply to different subdomains

    - by DaveDev
    How does the Same Origin Policy apply to the following two domains?

        http://server1.MyDomain.com
        http://server2.MyDomain.com

    Can I run JS on a page hosted on server1 if the content is retrieved from server2?

    Edit: according to Daniel's answer below, I can include scripts between different subdomains using the <script> tag, but what about asynchronous requests? What if I download a script from server2 onto the page hosted on server1? Can I use the script to communicate asynchronously with a service on server2?

    Read the article

  • Cross-platform (microcontroller-PC) algorithm development

    - by Kyr
    Hello people! I was asked to develop an algorithm for a network application in C. This project will be developed on Linux for the PC and then transferred to a more portable platform, something that will include a microcontroller. There are many microcontroller companies out there that provide very nice and large libraries for TCP/IP. This software will hold statistics on the network performance. The whole idea of a cross-platform (uC - PC) project seems rubbish to me, because eventually the code will have to be written in a more platform-specific way for the microcontroller, but I am no expert to judge anyway. Is there any clever way of doing this, or has anyone done this before? My brainstorming has produced "wrapper library" and "Matlab"... Any ideas? Thx!

    Read the article

  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation, when:

    - the target table uses a unique filtered index; and
    - no key column of the filtered index is updated; and
    - a column from the filtering condition is updated; and
    - transient key violations are possible.

    Example Tables

    Say we have two tables, one that is the target of a MERGE statement, and another that contains updates to be applied to the target. The target table contains three columns: an integer primary key, a single-character alternate key, and a status code column. A filtered unique index exists on the alternate key, but is only enforced where the status code is 'a':

        CREATE TABLE #Target
        (
            pk integer NOT NULL,
            ak character(1) NOT NULL,
            status_code character(1) NOT NULL,
            PRIMARY KEY (pk)
        );

        CREATE UNIQUE INDEX uq1
        ON #Target (ak)
        INCLUDE (status_code)
        WHERE status_code = 'a';

    The changes table contains just an integer primary key (to identify the target row to change) and the new status code:

        CREATE TABLE #Changes
        (
            pk integer NOT NULL,
            status_code character(1) NOT NULL,
            PRIMARY KEY (pk)
        );

    Sample Data

    The sample data for the example is:

        INSERT #Target (pk, ak, status_code)
        VALUES (1, 'A', 'a'), (2, 'B', 'a'), (3, 'C', 'a'), (4, 'A', 'd');

        INSERT #Changes (pk, status_code)
        VALUES (1, 'd'), (4, 'a');

                 Target                     Changes
        +-----------------------+    +------------------+
        ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦
        ¦----+----+-------------¦    ¦----+-------------¦
        ¦  1 ¦ A  ¦ a           ¦    ¦  1 ¦ d           ¦
        ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦
        ¦  3 ¦ C  ¦ a           ¦    +------------------+
        ¦  4 ¦ A  ¦ d           ¦
        +-----------------------+

    The target table's alternate key (ak) column is unique, for rows where status_code = 'a'. Applying the changes to the target will change row 1 from status 'a' to status 'd', and row 4 from status 'd' to status 'a'. The result of applying all the changes will still satisfy the filtered unique index, because the 'A' in row 1 will be deleted from the index and the 'A' in row 4 will be added.

    Merge Test One

    Let's now execute a MERGE statement to apply the changes:

        MERGE #Target AS t
        USING #Changes AS c ON c.pk = t.pk
        WHEN MATCHED AND c.status_code <> t.status_code
        THEN UPDATE SET status_code = c.status_code;

    The MERGE changes the two target rows as expected. The updated target table now contains:

        +-----------------------+
        ¦ pk ¦ ak ¦ status_code ¦
        ¦----+----+-------------¦
        ¦  1 ¦ A  ¦ d           ¦ <- changed from 'a'
        ¦  2 ¦ B  ¦ a           ¦
        ¦  3 ¦ C  ¦ a           ¦
        ¦  4 ¦ A  ¦ a           ¦ <- changed from 'd'
        +-----------------------+

    Merge Test Two

    Now let's repopulate the changes table to reverse the updates we just performed:

        TRUNCATE TABLE #Changes;

        INSERT #Changes (pk, status_code)
        VALUES (1, 'a'), (4, 'd');

    This will change row 1 back to status 'a' and row 4 back to status 'd'. As a reminder, the current state of the tables is:

                 Target                     Changes
        +-----------------------+    +------------------+
        ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦
        ¦----+----+-------------¦    ¦----+-------------¦
        ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦
        ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦
        ¦  3 ¦ C  ¦ a           ¦    +------------------+
        ¦  4 ¦ A  ¦ a           ¦
        +-----------------------+

    We execute the same MERGE statement:

        MERGE #Target AS t
        USING #Changes AS c ON c.pk = t.pk
        WHEN MATCHED AND c.status_code <> t.status_code
        THEN UPDATE SET status_code = c.status_code;

    However, this time we receive the following message:

        Msg 2601, Level 14, State 1, Line 1
        Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'. The duplicate key value is (A).
        The statement has been terminated.

    Applying the changes using UPDATE

    Let's now rewrite the MERGE to use UPDATE instead:

        UPDATE t
        SET status_code = c.status_code
        FROM #Target AS t
        JOIN #Changes AS c ON t.pk = c.pk
        WHERE c.status_code <> t.status_code;

    This query succeeds where the MERGE failed. The two rows are updated as expected:

        +-----------------------+
        ¦ pk ¦ ak ¦ status_code ¦
        ¦----+----+-------------¦
        ¦  1 ¦ A  ¦ a           ¦ <- changed back to 'a'
        ¦  2 ¦ B  ¦ a           ¦
        ¦  3 ¦ C  ¦ a           ¦
        ¦  4 ¦ A  ¦ d           ¦ <- changed back to 'd'
        +-----------------------+

    What went wrong with the MERGE?

    In this test, the MERGE query execution happens to apply the changes in the order of the 'pk' column. In test one, this was not a problem: row 1 is removed from the unique filtered index by changing status_code from 'a' to 'd' before row 4 is added. At no point does the table contain two rows where ak = 'A' and status_code = 'a'.

    In test two, however, the first change was to change row 1 from status 'd' to status 'a'. This change means there would be two rows in the filtered unique index where ak = 'A' (both row 1 and row 4 meet the index filtering criterion status_code = 'a'). The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn't apply to MERGE in any case). This strict rule applies regardless of the fact that, if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from 'a' to 'd', removing it from the filtered unique index, and resolving the key violation).

    Why it went wrong

    The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue. I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals, or in Craig Freedman's blog post on maintaining unique indexes). To summarize, though, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to:

    - split each row update into a delete followed by an insert;
    - filter out rows that would not change the index (due to the filter on the index, or a non-updating update);
    - sort the resulting stream by index key, with deletes before inserts;
    - collapse delete/insert pairs on the same index key back into an update.

    The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations). In this case, the net effect is a single update of the filtered unique index: changing the row for ak = 'A' from pk = 4 to pk = 1. In case that is less than 100% clear, let's look at the operation in test two again:

                 Target                     Changes                   Result
        +-----------------------+    +------------------+    +-----------------------+
        ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦    ¦ pk ¦ ak ¦ status_code ¦
        ¦----+----+-------------¦    ¦----+-------------¦    ¦----+----+-------------¦
        ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦    ¦  1 ¦ A  ¦ a           ¦
        ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦    ¦  2 ¦ B  ¦ a           ¦
        ¦  3 ¦ C  ¦ a           ¦    +------------------+    ¦  3 ¦ C  ¦ a           ¦
        ¦  4 ¦ A  ¦ a           ¦                            ¦  4 ¦ A  ¦ d           ¦
        +-----------------------+                            +-----------------------+

    From the filtered index's point of view (filtered for status_code = 'a' and shown in nonclustered index key order), the overall effect of the query is:

          Before           After
        +---------+    +---------+
        ¦ pk ¦ ak ¦    ¦ pk ¦ ak ¦
        ¦----+----¦    ¦----+----¦
        ¦  4 ¦ A  ¦    ¦  1 ¦ A  ¦
        ¦  2 ¦ B  ¦    ¦  2 ¦ B  ¦
        ¦  3 ¦ C  ¦    ¦  3 ¦ C  ¦
        +---------+    +---------+

    The single net change there is a change of pk from 4 to 1 for the nonclustered index entry ak = 'A'. This is the magic performed by the split, sort, and collapse. Notice in particular how the original changes to the index key (on the 'ak' column) have been transformed into an update of a non-key column (pk is included in the nonclustered index). By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations.

    The Execution Plans

    The estimated MERGE execution plan that produces the incorrect key-violation error: [execution plan image]

    The successful UPDATE execution plan: [execution plan image]

    The MERGE execution plan is a narrow (per-row) update. The single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index. The UPDATE plan is a wide (per-index) update. The clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained.

    There is always a wide update plan for any query that modifies the database. The narrow form is a performance optimization where the number of rows is expected to be relatively small, and it is not available for all operations. One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur.

    Workarounds

    The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by:

    - adding all columns referenced in the filtered index's WHERE clause to the index key (INCLUDE is not sufficient); or
    - executing the query with trace flag 8790 set, e.g. OPTION (QUERYTRACEON 8790).

    Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible). Either change will produce a successfully-executing wide update plan for the MERGE that failed previously.

    Conclusion

    The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post. It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance. The MERGE plan may fail at execution time depending on the order in which rows are processed and the distribution of data in the database. Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point.

    Connect bug filed here.

    Tests performed on SQL Server 2012 SP1 CU1 (build 11.0.3321) x64 Developer Edition.

    © 2012 Paul White – All Rights Reserved
    Twitter: @SQL_Kiwi
    Email: [email protected]

    Read the article

  • MongoDB, Carrierwave, GridFS and prevention of files' duplication

    - by Arkan
    I am dealing with Mongoid, CarrierWave and GridFS to store my uploads. For example, I have a model Article containing a file upload (a picture):

        class Article
          include Mongoid::Document
          field :title, :type => String
          field :content, :type => String
          mount_uploader :asset, AssetUploader
        end

    But I would like to store each file only once, for the case where I upload the same file many times for different articles. I saw that GridFS has an MD5 checksum. What would be the best way to prevent duplication of identical files? Thanks

    Read the article

  • Make Custom Project template in Eclipse IDE

    - by Mohit Deshpande
    I have been using the Eclipse IDE for a long time. It's a really great IDE for Java/C/C++ (and other languages, with its thousands of plugins). Every once in a while, I need to create a Javax interface. To do this normally, I would set up a new Java project and then add what I need. But wouldn't it be nice if I could just make a template project that automatically includes the code for the files? How would I go about doing this? Is it even possible? The Eclipse CDT can make a new project type, and so can the Google ADT and the Google App Engine plugin. So I would imagine it is possible. But how?

    Read the article

  • Undefined reference to cmph functions even after installing cmph library

    - by user1242145
    I am using gcc 4.4.3 on Ubuntu. I installed the cmph library tools 0.9-1 using the command

        sudo apt-get install libcmph-tools

    Now, when I try to compile the example program vector_adapter_ex1.c, gcc finds the cmph.h header in its include path but shows multiple errors like:

        vector_adapter_ex1.c:(.text+0x93): undefined reference to `cmph_io_vector_adapter'
        vector_adapter_ex1.c:(.text+0xa3): undefined reference to `cmph_config_new'
        vector_adapter_ex1.c:(.text+0xbb): undefined reference to `cmph_config_set_algo'
        vector_adapter_ex1.c:(.text+0xcf): undefined reference to `cmph_config_set_mphf_fd'

    even though these are all defined in the source code of the cmph library. Could anyone tell me what error might have occurred, or suggest an alternate method to go about building minimal perfect hash functions?

    Read the article
