Search Results

Search found 14080 results on 564 pages for 'known types'.


  • How to exclude classes exported for actionscript in AsDoc

    - by Bartvbl
    I am trying to run AsDoc on the code of one of my Flash projects. Some of the classes that I use in the code are "exported for ActionScript" classes from my FLA file, so the AsDoc compiler naturally complains that those classes are not defined anywhere. Does anyone know how to make it ignore these class types?
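    A minimal command-line sketch of one possible approach, assuming the Flex SDK's asdoc tool and its -exclude-classes option (the class and path names below are placeholders, and the option names are worth double-checking against the SDK version in use):

        asdoc -source-path ./src \
              -doc-sources ./src \
              -exclude-classes assets.ExportedSymbol1,assets.ExportedSymbol2 \
              -output ./docs

    The idea is simply to list the FLA-exported classes so asdoc skips documenting (and resolving) them.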

    Read the article

  • Problem with filtering a drop-down list in a scroll window

    - by Rahul
    Hi all, I have a problem with filtering a drop-down list. The scenario: in a scroll window there are two fields, Document Type and Type ID. Document Type is a drop-down list; based on the Document Type selection, the Type ID lookup should display the matching values. For example, if I select Quote from Document Type and then open the Type ID lookup, it should display only quotations in the lookup window. It should work this way for all the values of the Document Type drop-down list, and it is working fine. The items in the Document Type drop-down are: Quote, Order, Invoice, Return, BackOrder.

    The problem is after saving the data, when I display the same record in the scroll window. Suppose the displayed record has Document Type QUOTE and Document ID QTOARD, and in this position I change the Document Type drop-down from QUOTE to ORDER. At this point a warning message should come up saying the range entered is invalid, because in the database table there is no document QTOARD for the ORDER type. The same should work for all the conditions. The table name is SOP_ID_Setp and the key is SOP Type and Document ID.

    For that I have written the stored procedure:

        create procedure DocTypeFilter
            @DocumentType as int,
            @DocumentID as varchar(30)
        as
            --declare
            --@documentype int,
            --@documentID varchar(30),
            select * from sop40200
            where soptype = @DocumentType and docid = @DocumentID

    and I have called this SP in the drop-down list's change event:

        local long retcode;
        range clear table SOP_ID_SETP;
        clear field 'SOP Type' of table SOP_ID_SETP;
        clear field 'Document ID' of table SOP_ID_SETP;
        range start table SOP_ID_SETP;
        fill field 'SOP Type' of table SOP_ID_SETP;
        fill field 'Document ID' of table SOP_ID_SETP;
        range end table SOP_ID_SETP;
        if err() = OKAY then
            call DocTypeFilter, retcode,
                'Document Type' of window 'Is_Document Type Site_Scroll',
                'Document ID' of window 'Is_Document Type Site_Scroll';
        else
            warning "The range entered is invalid";
            clear window 'Is_Document Type Site_Scroll';
            fill window 'Is_Document Type Site_Scroll' table is_sop_site_line_temp;
        end if;

    The above code is not giving the expected output. Any help please?

    Read the article

  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
    I've been analyzing a recurring "bug report" (perf issue) in one of our systems related to a particularly slow delete operation. Long story short: It seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case.

    We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this:

        Widgets
        |
        +--- Anvils (1:1)
        |       |
        |       +--- AnvilTestData (1:N)
        |
        +--- WidgetHistory (1:N)
                |
                +--- WidgetHistoryDetails (1:N)

    Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads. Then every Widget has a long, boring history of various types of transactions - production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age.

    So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option. Relations are all indexed, statistics are up to date. Normal queries are fast. The system tends to hum along pretty smoothly for everything except deletes.

    Getting to the point here, finally, for various reasons we only allow deleting one widget at a time, so a delete statement would look like this:

        DELETE FROM Widgets
        WHERE WidgetID = @WidgetID

    Pretty simple, innocuous looking delete... that takes over 2 minutes to run, for a widget with no data!

    After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following:

        DECLARE @AnvilID int
        SELECT @AnvilID = AnvilID
        FROM Anvils
        WHERE WidgetID = @WidgetID

        DELETE FROM AnvilTestData
        WHERE AnvilID = @AnvilID

        DELETE FROM WidgetHistory
        WHERE HistoryID IN (
            SELECT HistoryID
            FROM WidgetHistory
            WHERE WidgetID = @WidgetID)

        DELETE FROM Widgets
        WHERE WidgetID = @WidgetID

    Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data.

    Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest; I only removed the one originating from Widgets. Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes.

    I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets, too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds.

    Now I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships. So, my questions are:

    Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.)

    If not, is there another explanation for the behaviour I'm seeing?

    If it is a known issue, why is it an issue, and are there better workarounds I could be using?
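    For reference, a minimal T-SQL sketch of the "keep the FK, drop the cascade" step described above; the constraint name FK_Anvils_Widgets is an assumption, not taken from the real schema:

        -- Drop the cascading FK and re-create it with NO ACTION
        -- (constraint and column names are placeholders)
        ALTER TABLE dbo.Anvils DROP CONSTRAINT FK_Anvils_Widgets;

        ALTER TABLE dbo.Anvils
            ADD CONSTRAINT FK_Anvils_Widgets
            FOREIGN KEY (WidgetID) REFERENCES dbo.Widgets (WidgetID)
            ON DELETE NO ACTION;

    The child rows are then removed by explicit DELETE statements (as in the script in the question) before the parent row is deleted, so referential integrity is still enforced at each step.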

    Read the article

  • "RFC 2833 RTP Event" Consecutive Events and the E "End" Bit

    - by brian_d
    Hello, I can send out an RFC 2833 DTMF event as outlined at http://www.ietf.org/rfc/rfc2833.txt

    When I include the E "End" bit but leave it set to 0, I get the following behaviour: if, for example, the keys 7874556332111111145855885#3 were pressed, then ALL events would be sent and show up in a program like Wireshark, however only 87456321458585#3 would sound. So the first key (which I figure could be a separate issue) and any repeats of an event (i.e. 11111) are failing to sound.

    In section 3.9, figure 2 of the above linked document, they give a 911 example. There, all but the last event have the E bit set. When I set the bit for all numbers, I never get an event to sound.

    I have thought of a couple of possible causes but do not know if they are the reason:

    1) Figure 2 shows payload types 96 and 97 being sent. I have not done this, nor do I know exactly how to. In section 3.8, codes 96 and 97 are described as "the dynamic payload types 96 and 97 have been assigned for the redundancy mechanism and the telephone event payload respectively".

    2) In section 3.5, "E:", "A sender MAY delay setting the end bit until retransmitting the last packet for a tone, rather than on its first transmission".

    Does anyone have an idea of how to actually do this? I have also fiddled around with timestamp intervals and the RTP marker. Any help is greatly appreciated.

    Here is a sample Wireshark event capture of the relevant areas:

        6590 31.159045000 xx.x.x.xxx --.--.---.-- RTP EVENT Payload type=RTP Event, DTMF Pound # (end)
        Real-Time Transport Protocol
            Stream setup by SDP (frame 6225)
                Setup frame: 6225
                Setup Method: SDP
            10.. .... = Version: RFC 1889 Version (2)
            ..0. .... = Padding: False
            ...0 .... = Extension: False
            .... 0000 = Contributing source identifiers count: 0
            0... .... = Marker: False
            Payload type: telephone-event (101)
            Sequence number: 0
            Extended sequence number: 65536
            Timestamp: 0
            Synchronization Source identifier: 0x15f27104 (368210180)
        RFC 2833 RTP Event
            Event ID: DTMF Pound # (11)
            1... .... = End of Event: True
            .0.. .... = Reserved: False
            ..00 0000 = Volume: 0
            Event Duration: 2048
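    For reference, a minimal C sketch of the 4-byte telephone-event payload layout from RFC 2833 section 3.5 (this is an illustration of the wire format, not code from the question; masks are used instead of bitfields to keep byte order explicit):

        #include <stdint.h>

        /* Build a 4-byte RFC 2833 telephone-event payload.
         * event:    DTMF code (0-9 = digits, 10 = '*', 11 = '#')
         * end:      set non-zero only on the packet(s) that finish the tone
         * volume:   0-63
         * duration: running tone duration in RTP timestamp units
         */
        static void build_dtmf_payload(uint8_t out[4], uint8_t event,
                                       int end, uint8_t volume, uint16_t duration)
        {
            out[0] = event;
            out[1] = (uint8_t)(((end ? 1 : 0) << 7) | (volume & 0x3F)); /* E bit, R=0, volume */
            out[2] = (uint8_t)(duration >> 8);   /* duration, network byte order */
            out[3] = (uint8_t)(duration & 0xFF);
        }

    Note that per RFC 2833, each new digit starts with a fresh RTP timestamp and the RTP marker bit set on its first packet, while retransmissions of the same digit keep that timestamp and only grow the duration; a missing timestamp/marker change is one common reason consecutive identical digits (e.g. 11111) collapse into a single tone at the receiver.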

    Read the article

  • ItemsControl ItemTemplate Binding

    - by Wonko the Sane
    Hi All, In WPF 4.0, I have a class that contains other class types as properties (combining multiple data types for display). Something like:

        public partial class Owner
        {
            public string OwnerName { get; set; }
            public int OwnerId { get; set; }
        }

        partial class ForDisplay
        {
            public Owner OwnerData { get; set; }
            public int Credit { get; set; }
        }

    In my window, I have an ItemsControl with the following (clipped for clarity):

        <ItemsControl ItemsSource="{Binding}">
            <ItemsControl.ItemTemplate>
                <DataTemplate>
                    <local:MyDisplayControl
                        OwnerName="{Binding OwnerData.OwnerName}"
                        Credit="{Binding Credit}" />
                </DataTemplate>
            </ItemsControl.ItemTemplate>
        </ItemsControl>

    I then get a collection of display information from the data layer and set the DataContext of the ItemsControl to this collection. The Credit property gets displayed correctly, but the OwnerName property does not. Instead, I get a binding error:

        Error 40: BindingExpression path error: 'OwnerName' property not found on 'object' ''ForDisplay' (HashCode=449124874)'. BindingExpression:Path=OwnerName; DataItem='ForDisplay' (HashCode=449124874); target element is 'TextBlock' (Name=txtOwnerName'); target property is 'Text' (type 'String')

    I don't understand why this is attempting to look for the OwnerName property on the ForDisplay class, rather than on the Owner class from the ForDisplay OwnerData property.

    Edit: It appears to have something to do with using the custom control. If I bind the same properties to a TextBlock, they work correctly:

        <ItemsControl ItemsSource="{Binding}">
            <ItemsControl.ItemTemplate>
                <DataTemplate>
                    <StackPanel>
                        <local:MyDisplayControl
                            OwnerName="{Binding OwnerData.OwnerName}"
                            Credit="{Binding Credit}" />
                        <TextBlock Text="{Binding OwnerData.OwnerName}" />
                        <TextBlock Text="{Binding Credit}" />
                    </StackPanel>
                </DataTemplate>
            </ItemsControl.ItemTemplate>
        </ItemsControl>

    Thanks, wTs
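    One cause consistent with that error message is the TextBlock inside MyDisplayControl binding Path=OwnerName against the inherited item DataContext (a ForDisplay instance) instead of against the control's own OwnerName dependency property. A hedged sketch of binding the inner element to the control itself, assuming MyDisplayControl is a UserControl (the MyApp namespace, x:Name="root" and the trimmed XAML namespaces are assumptions; only OwnerName is shown):

        // MyDisplayControl.xaml.cs (sketch)
        using System.Windows;
        using System.Windows.Controls;

        namespace MyApp
        {
            public partial class MyDisplayControl : UserControl
            {
                public static readonly DependencyProperty OwnerNameProperty =
                    DependencyProperty.Register("OwnerName", typeof(string), typeof(MyDisplayControl));

                public string OwnerName
                {
                    get { return (string)GetValue(OwnerNameProperty); }
                    set { SetValue(OwnerNameProperty, value); }
                }
            }
        }

        <!-- MyDisplayControl.xaml (sketch, namespaces trimmed):
             bind to the control's own property, not the inherited DataContext -->
        <UserControl x:Class="MyApp.MyDisplayControl" x:Name="root">
            <TextBlock x:Name="txtOwnerName"
                       Text="{Binding OwnerName, ElementName=root}" />
        </UserControl>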

    Read the article

  • Puppet nodes can't find master, EC2 public versus internal IP addresses and hosts files

    - by Blankman
    If I set up my hosts files such that they reference all other EC2 nodes using the internal IP addresses, will this work, or do I have to use the external IP addresses? Do I need to specify anything in my security group to get internal IP addresses to work?

    For example, in /etc/hosts:

        ip-10-11-12-13.internal some_node_name

    If I do this, can I reference some_node_name anywhere in my scripts where I would previously have used the IP address?

    On my puppet agent servers, I have a reference to my puppet master like:

        public-ip-here puppet

    When I reboot my puppet agents, syslog shows they couldn't find the master, with the message:

        getaddrinfo: name or service not known

    I did get it to work by updating /etc/default/puppet and adding to the options:

        --server=public-ip-here

    From what I read, puppet will by default try using 'puppet', and I set this in my hosts file, so why wouldn't it be picking this up?
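    A minimal sketch of the kind of setup being described, assuming a fairly standard Puppet agent; the IP address and hostnames below are placeholders, not values from the question:

        # /etc/hosts on each agent - map the name "puppet" to the master's address
        # (a private EC2 address works as long as the security group allows TCP 8140)
        10.11.12.13    puppet puppetmaster.internal

        # /etc/puppet/puppet.conf on each agent - equivalent to passing --server
        [agent]
            server = puppet

    If the agent still falls back to a different name, it may be worth checking whether /etc/default/puppet or the init script is overriding the server setting.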

    Read the article

  • How can Core Data store an NSData?

    - by mystify
    The documentation says that Core Data properties can only store NSString, NSNumber and NSDate types. However, a lot of Core Data users claim that Core Data can also store an NSData type. I wasn't able to see that in the documentation, although the Xcode data modeler allows you to choose a data type called "binary" (which seems to be NSData). Did I miss something? Is there a hidden place in the documentation that indeed lists NSData for binary stuff?
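    A minimal Objective-C sketch of treating a binary attribute as NSData via key-value coding; the entity name "Picture", the attribute name "imageData" and the file path are hypothetical:

        // Assumes an NSManagedObjectContext *context and an entity "Picture"
        // with an attribute of type Binary Data named "imageData".
        NSManagedObject *picture =
            [NSEntityDescription insertNewObjectForEntityForName:@"Picture"
                                          inManagedObjectContext:context];

        NSData *bytes = [NSData dataWithContentsOfFile:@"/tmp/photo.png"];
        [picture setValue:bytes forKey:@"imageData"];     // store NSData

        NSError *error = nil;
        if (![context save:&error]) {
            NSLog(@"Save failed: %@", error);
        }

        NSData *stored = [picture valueForKey:@"imageData"];   // read it back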

    Read the article

  • In Ruby, how do I read memory values from an external process?

    - by grg-n-sox
    So all I simply want to do is make a Ruby program that reads some values from known memory addresses in another process's virtual memory. Through my research and basic knowledge of hex editing a running process's x86 assembly in memory, I have found the base address and offsets for the values in memory I want. I do not want to change them; I just want to read them.

    I asked a developer of a memory editor how to approach this, abstract of language and assuming a Windows platform. He told me the Win32 API calls OpenProcess, CreateProcess, ReadProcessMemory, and WriteProcessMemory were the way to go using either C or C++.

    I think the way to go would be just using the Win32API class and mapping two instances of it: one for either OpenProcess or CreateProcess, depending on whether the user already has the process running or not, and another instance mapped to ReadProcessMemory. I probably still need to find the function for getting the list of running processes so I know which running process is the one I want if it is already running. This would take some work to put all together, but I am figuring it wouldn't be too bad to code up. It is just a new area of programming for me, since I have never worked this low-level from a high-level language (well, higher level than C anyway).

    I am just wondering about the ways to approach this. I could just use a bunch of Win32API calls, but that means having to deal with a bunch of string and array packing and unpacking that is system dependent. I want to eventually make this work cross-platform, since the process I am reading from is produced from an executable that has multiple platform builds. (I know the memory address changes from system to system. The idea is to have a flat file that contains all memory mappings, so the Ruby program can just match the current platform environment to the matching memory mapping.) But from the looks of things I'll just have to make a class that wraps whatever the current platform's shared-library memory-related function calls are. For all I know, there could already be a Ruby gem that takes care of all of this for me that I am just not finding.

    I could also possibly try editing the executables for each build so that whenever the memory values I want to read are written to by the process, it also writes a copy of the new value to a space in shared memory, have Ruby create an instance of a class that is, under the hood, a pointer to that shared memory address, and somehow signal to the Ruby program that the value was updated and should be reloaded. Basically an interrupt-based system would be nice, but since the purpose of reading these values is just to send them to a scoreboard broadcast from a central server, I could just stick to a polling-based system that sends updates at fixed time intervals.

    I could also just abandon Ruby altogether and go for C or C++, but I do not know those nearly as well. I actually know more x86 than C++, and I only know C as far as system-independent ANSI C; I have never dealt with shared system libraries before.

    So is there a gem or lesser-known module available that has already done this? If not, then any additional information as to how to accomplish this would be nice. I guess, long story short, how do I do all this?

    Thanks in advance,
    Grg

    PS: Also a confirmation that those Win32API calls should be aimed at the kernel32.dll library would be nice.
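    A minimal Ruby sketch of the Win32API approach described above, assuming the process is already running; the process ID, address and value size are placeholders, and OpenProcess/ReadProcessMemory/CloseHandle do live in kernel32.dll:

        require 'Win32API'

        PROCESS_VM_READ = 0x0010

        open_process = Win32API.new('kernel32', 'OpenProcess',       ['L', 'I', 'L'],           'L')
        read_memory  = Win32API.new('kernel32', 'ReadProcessMemory', ['L', 'L', 'P', 'L', 'P'], 'I')
        close_handle = Win32API.new('kernel32', 'CloseHandle',       ['L'],                     'I')

        pid     = 1234            # placeholder: the target process ID
        address = 0x00400000      # placeholder: base address + offset found earlier

        handle = open_process.call(PROCESS_VM_READ, 0, pid)
        raise 'OpenProcess failed' if handle == 0

        buffer     = "\0" * 4     # room for a 32-bit value
        bytes_read = "\0" * 4
        if read_memory.call(handle, address, buffer, buffer.size, bytes_read) != 0
          value = buffer.unpack('l').first   # native-endian signed 32-bit
          puts "value at 0x%08x = %d" % [address, value]
        end

        close_handle.call(handle)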

    Read the article

  • Examples of usability disasters?

    - by Willie Wheeler
    Anybody have good examples of usability disasters? Here's one: Hector is a manager with a large team. A department admin wants to send Hector a spreadsheet with his team's salaries. She types "Hector" in the Outlook "To:" field. It autocompletes to "Hector's Team", but she doesn't notice that until after she sends it.

    Read the article

  • How to add an action related to a business object in Metawidget?

    - by Gulcan
    I use NetBeans 6.8 and am trying to obtain a user interface by using Metawidget and JPA. I cannot write:

        @Action
        public void save( ActionEvent event ) {
            mSearchMetawidget.save();
        }

    This annotation gives an "incompatible types" error when I add the following import:

        import org.metawidget.inspector.impl.actionstyle.Action;

    What should I do? How can I add an action related to my User entity? I want to implement a "register" action. Thanks in advance.

    Read the article

  • Convert List of one type to Array of another type using Dozer

    - by aheu
    I'm wondering how to convert a List of one type to an array of another type in Java using Dozer. The two types have all the same property names/types. For example, consider these two classes:

        public class A {
            private String test = null;

            public String getTest() {
                return this.test;
            }

            public void setTest(String test) {
                this.test = test;
            }
        }

        public class B {
            private String test = null;

            public String getTest() {
                return this.test;
            }

            public void setTest(String test) {
                this.test = test;
            }
        }

    I've tried this with no luck:

        List<A> listOfA = getListofAObjects();
        Mapper mapper = DozerBeanMapperSingletonWrapper.getInstance();
        B[] bs = mapper.map(listOfA, B[].class);

    I've also tried using the CollectionUtils class:

        CollectionUtils.convertListToArray(listOfA, B.class)

    Neither is working for me; can anyone tell me what I am doing wrong? The mapper.map function works fine if I create two wrapper classes, one containing a List<A> and the other a B[]. See below:

        public class C {
            private List<A> items = null;

            public List<A> getItems() {
                return this.items;
            }

            public void setItems(List<A> items) {
                this.items = items;
            }
        }

        public class D {
            private B[] items = null;

            public B[] getItems() {
                return this.items;
            }

            public void setItems(B[] items) {
                this.items = items;
            }
        }

    This works, oddly enough:

        List<A> listOfA = getListofAObjects();
        C c = new C();
        c.setItems(listOfA);
        Mapper mapper = DozerBeanMapperSingletonWrapper.getInstance();
        D d = mapper.map(c, D.class);
        B[] bs = d.getItems();

    How do I do what I want to do without using the wrapper classes (C & D)? There has got to be an easier way... Thanks!
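    A hedged sketch of one wrapper-free alternative, assuming only the Dozer Mapper API already shown in the question: map each element individually and collect the results into an array (requires java.util.List / java.util.ArrayList in addition to the Dozer imports above).

        List<A> listOfA = getListofAObjects();
        Mapper mapper = DozerBeanMapperSingletonWrapper.getInstance();

        List<B> mapped = new ArrayList<B>(listOfA.size());
        for (A a : listOfA) {
            mapped.add(mapper.map(a, B.class));   // element-level mapping
        }
        B[] bs = mapped.toArray(new B[mapped.size()]);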

    Read the article

  • Minimize the chance my email is blocked/filtered as spam

    - by justSteve
    I'm running a web-based store where order confirmations are sometimes blocked and don't reach the intended user. The structure of the business model is such that our product is marketed to the end-user by 3rd parties - affiliates who are known entities to the end-users, and email is freely exchanged between end-users and our affiliates. Our confirmations being blocked is becoming a big enough problem that we are considering implementing a system where a 'confirmations' address is created within each affiliate's domain, and we'd then have our app send via the affiliate's mail server instead of our own. But that'd be a lot of work. The idea has been raised to have our app use our affiliates' addresses in the FROM field but still send from our server. My thinking is that this would be detected on the end-user's side and blocked just as often - we're dealing with institutions large enough to run at least some checks at the perimeter. Is this assumption correct (more likely to be blocked), or is there a less roundabout way to send messages under the auspices of 3rd parties? thx

    Read the article

  • Is there a compiled* programming language with dynamic, maybe even weak typing?

    - by sub
    I wondered if there is a programming language which compiles to machine code/binary (not bytecode that is then executed by a VM, which is something completely different when considering typing) that features dynamic and/or weak typing. Think of a compiled language where:

    Variables don't need to be declared
    Variables can be created at runtime
    Functions can return values of different types

    Questions:

    Is there such a programming language? (Why) not?

    I think that a dynamically yet strongly typed, compiled language would really make sense, but is it possible?

    Read the article

  • Moving VMWare Fusion image to Boot Camp

    - by Kristopher Johnson
    I have Windows 7 64-bit running in VMWare Fusion on my MacBook, but am disappointed with the performance, and so I want to try Boot Camp. However, I'd like to avoid reinstalling Windows and all my applications; I just want to somehow copy my VMWare Fusion "disk image" to a Boot Camp partition. My initial thoughts are that I should be able to run a Windows backup program in VMWare Fusion to back up the entire virtual disk, then set up Boot Camp and restore from that backup. However, Googling finds a few posts by people who have tried that and have encountered problems. So, is there a "known good" procedure for doing this?

    Read the article

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database/datastore, but the .Net app is a single instance. The documents are pretty much read-only (i.e. an image archive of TIFFs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB).

    The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types. For example, one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date.

    So far I've used the old method which SharePoint (used?) to use, and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then I've got a "mapping" table which stores the customer-specific index name and the database field it maps to. I've avoided the key-value-pair model in the DB due to the volume of documents. This way, we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document-type searches (i.e. the user selects a document type, then they're presented with a list of search fields).

    However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. However, I've got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing).

    We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static.

    So, my questions I guess:

    Is a NoSQL DB a good fit?
    Is MongoDB the best for ASP.Net? (I saw Raven and Velocity, but they're still kinda beta.)
    Can I store a key for each document, and then store the action history in an MSSQL DB with this key? I don't need to do joins; it would only be used if a person clicks "View History" against a document.
    How would performance compare between the two (NoSQL DB vs denormalized "document" table)?

    Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
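    To illustrate the "no denormalizing" point, a MongoDB-shell sketch of what one stored document for the invoice tenant might look like; every name and value here is a hypothetical example for illustration, not part of the actual design:

        db.documents.insert({
          _id: "doc-000123",          // assigned during the controlled load process
          tenantId: "tenant-42",
          docType: "Invoice",
          indexes: {                  // per-tenant, per-type search fields
            CustomerID: "C-1001",
            InvoiceNumber: "INV-2044",
            InvoiceDate: ISODate("2010-06-01")
          },
          file: { format: "tiff", storageKey: "archive/2010/06/doc-000123.tif" }
        })

        // one possible index supporting the "pick a document type, then search its fields" screens
        db.documents.ensureIndex({ tenantId: 1, docType: 1, "indexes.InvoiceNumber": 1 })

    Each tenant's document types can carry whatever index fields they need without a shared int_field_1/date_field_1 table or a mapping table.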

    Read the article

  • Most Astonishing Violation of the Principle of Least Astonishment

    - by Adam Liss
    The Principle of Least Astonishment suggests that a system should operate as a user would expect it to, as much as possible. In other words, it should never "astonish" the user with unexpected behavior. In your experience as the "astonishee," what types of systems are the worst offenders, and if you were the project manager, how would you correct the problem? Bonus if your answer describes how you'd retrain the developers!

    Read the article

  • Cisco ASA - VPN and Hairpinning....

    - by Nordberg
    Hi,

    We have 2 sites that will be linked by an IPsec VPN between 2 Cisco ASAs:

    Site 1: 8Mb ADSL connection, Cisco ASA 505
    Site 2: 2Mb SDSL connection, Cisco ASA 505

    Basically, both sites need access to a service at the end of another IPsec VPN, Site 3, which I plan to terminate at Site 2. This is due to the way the service is sold - it's billed per gateway, so if both Site 1 and Site 2 had their own VPN connection to Site 3, it would cost us twice as much...

    Anyway, my idea is to have all traffic from Site 1 destined for Site 3 go via the VPN between Site 1 and Site 2, the end result being that all traffic that hits Site 3 has come via Site 2. I understand this is known as hairpinning, but I'm struggling to find much information on how this is set up.

    So, firstly, can anyone confirm that what I'm trying to achieve is possible and, secondly, can anyone point me in the direction of an example of such a configuration?

    Many Thanks.
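    A very small sketch of the pieces usually involved on the Site 2 ASA, hedged heavily since the full configuration depends on the existing crypto maps and ACLs; SITE1_SUBNET and SITE3_SUBNET are placeholders for the real networks:

        ! Allow traffic to enter and leave the same (outside) interface,
        ! which is what VPN-to-VPN hairpinning requires on an ASA
        same-security-traffic permit intra-interface

        ! The crypto ACLs then need to cover the hairpinned flow:
        !   Site1<->Site2 tunnel: interesting traffic must include SITE1_SUBNET -> SITE3_SUBNET
        !   Site2<->Site3 tunnel: interesting traffic must include SITE1_SUBNET -> SITE3_SUBNET
        ! (and NAT exemptions adjusted to match)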

    Read the article

  • Homebrew on Mac Lion

    - by user975352
    I'm a beginner on Mac OS X Lion (10.7.2). I don't know Mac well, but I do know Ubuntu. I installed Homebrew on my Mac and ran the commands below:

        $ brew install git

    and then:

        $ brew update
        error: Could not resolve host: github.com; nodename nor servname provided, or not known while accessing https://github.com/mxcl/homebrew.git/info/refs
        fatal: HTTP request failed
        Error: Failed while executing git pull origin refs/heads/master:refs/remotes/origin/master

    What's happening on my Mac? How can I resolve this? Would you help me?

    Read the article

  • Logic Circuits & Shift Registers?

    - by Thomas Covenant
    Hey all, Could anyone point me to a logical diagram of, or show me how to create, a Parallel In/Serial Out shift register that uses J-K flip-flops? I've found diagrams that use D-types, but no J-Ks. Any help would be greatly appreciated. Thanks.

    Read the article
