Search Results

Search found 9124 results on 365 pages for 'big sal'.

  • How to make an accessor for a Dictionary so that the returned Dictionary cannot be changed (C# / 2.0)

    - by matti
    I thought of the solution below because the collection is very, very small. But what if it was big?

        private Dictionary<string, OfTable> _folderData = new Dictionary<string, OfTable>();

        public Dictionary<string, OfTable> FolderData
        {
            get { return new Dictionary<string, OfTable>(_folderData); }
        }

    With List you can do this:

        public class MyClass
        {
            private List<int> _items = new List<int>();

            public IList<int> Items
            {
                get { return _items.AsReadOnly(); }
            }
        }

    That would be nice! Thanks in advance, cheers & BR - Matti

    Update: now that I think about it, the objects in the collection live on the heap, so my solution does not prevent the caller from modifying them, because both Dictionarys contain references to the same objects. Does this apply to the List example above?

        class OfTable
        {
            private string _wTableName;
            private int _table;
            private List<int> _classes;
            private string _label;

            public OfTable()
            {
                _classes = new List<int>();
            }

            public int Table
            {
                get { return _table; }
                set { _table = value; }
            }

            public List<int> Classes
            {
                get { return _classes; }
                set { _classes = value; }
            }

            public string Label
            {
                get { return _label; }
                set { _label = value; }
            }
        }

    So how do I make this immutable?
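
    One way to get a read-only view in C# 2.0 (which predates .NET 4.5's ReadOnlyDictionary) is a small wrapper that exposes lookups but no mutators. A minimal sketch, with a hypothetical wrapper name:

        // Hypothetical helper: wraps the dictionary without copying it, so it stays O(1)
        // no matter how big the collection gets. Lookups still hand out live references,
        // so OfTable itself must also drop its setters to be truly immutable.
        public class ReadOnlyDictionaryView<TKey, TValue>
        {
            private readonly IDictionary<TKey, TValue> _inner;

            public ReadOnlyDictionaryView(IDictionary<TKey, TValue> inner)
            {
                _inner = inner;
            }

            public TValue this[TKey key]
            {
                get { return _inner[key]; }
            }

            public bool ContainsKey(TKey key)
            {
                return _inner.ContainsKey(key);
            }

            public int Count
            {
                get { return _inner.Count; }
            }
        }

    The same caveat applies to the List.AsReadOnly example: it blocks Add/Remove on the list, but the contained objects are still shared references.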

  • Can I use foreign key restrictions to return meaningful UI errors with PHP

    - by Shane
    I want to start by saying that I am a big fan of using foreign keys, and I tend to use them even on small projects to keep my database from filling up with orphaned data. On larger projects I end up with gobs of keys, covering upwards of 8-10 layers of data. I want to know if anyone can suggest a graceful way of handling 'expected errors' from the MySQL database, in a way that lets me construct meaningful messages for the end user. I will explain 'expected errors' with an example. Let's say I have a set of tables used for basic discussions: discussions, questions, responses, users. Hierarchically they would look something like this:

        users
          discussions
            questions
              responses

    When I attempt to delete a user, the FKs check discussions, and if any discussions exist the deletion is restricted; deleting a discussion checks questions, and deleting a question checks responses. An 'expected error' in this case would be attempting to delete a user: unless they are newly created, I can anticipate that one or more foreign keys will fail, causing an error. What I WANT to do is catch that error on deletion and be able to tell the end user something like "We're sorry, but all discussions must be removed before you can delete this user...". Now, I know I can keep and maintain matching arrays in PHP and map specific errors to messages, but that is messy and prone to becoming stagnant; or I could manually run a set of SELECTs before attempting the deletion, but then I am doing just as much work as without FKs. Any help here would be greatly appreciated, or if I am just looking at this completely wrong then please let me know. On a side note, I generally use CodeIgniter for my application development, so if that opens up an avenue through that framework, please consider it in your answers. Thanks in advance.
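
    The usual trick, in any language, is to catch the driver's foreign-key error code and translate it. A sketch of the idea in C# (assuming the MySql.Data connector, where MySqlException exposes the server error number; 1451 is MySQL's ER_ROW_IS_REFERENCED_2, raised when a delete is blocked by a child row):

        using System.Collections.Generic;
        using MySql.Data.MySqlClient;

        static class FkErrorMapper
        {
            // Child table name -> user-facing message; maintained alongside the schema.
            static readonly Dictionary<string, string> Messages = new Dictionary<string, string>
            {
                { "discussions", "We're sorry, but all discussions must be removed before you can delete this user." },
                { "questions",   "All questions must be removed before you can delete this discussion." }
            };

            public static string ToUserMessage(MySqlException ex)
            {
                if (ex.Number != 1451)
                    return null; // not an 'expected' FK error; let the caller rethrow

                // MySQL's error message quotes the child table in backticks, so the map
                // can be keyed by table name rather than enumerating every constraint.
                foreach (KeyValuePair<string, string> pair in Messages)
                    if (ex.Message.Contains("`" + pair.Key + "`"))
                        return pair.Value;
                return "This record is still in use and cannot be deleted.";
            }
        }

    In PHP the same error number is available from mysqli_errno(), so the identical lookup works there.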

  • Hover-to-click on jQuery UI datePicker 'next month' and 'prev month' not working

    - by user316727
    Hi there, I have a calendar which is meant to look much like the calendar in Outlook. There is a big field representing the hours in a day, and there is a date navigator, which is the jQuery UI Datepicker. I want users to be able to navigate to a new day by clicking a date in the datepicker, but also to be able to drag appointments over the datepicker and drop them on a specific date. I have that working now. I also want users, while they are dragging an appointment, to be able to move to the next or previous month simply by hovering over the datepicker. So I've added mouseenter and mouseleave events: one runs a setInterval function which triggers a click every 1.5 seconds; the other cancels the interval function. This is where all sorts of things go wrong. As soon as one click has been triggered, the mouseleave handler no longer works: in other words, the datepicker keeps flipping over to another month every 1.5 seconds. It seems that the datepicker interferes, or that the click event causes other things to go wrong. What can I do?

  • question about counting sort

    - by davit-datuashvili
    Hi, I have written the following code, which prints the elements in sorted order. The one big problem is that it uses two additional arrays. Here is my code:

        public class occurance {
            public static final int n = 5;

            public static void main(String[] args) {
                // n is the maximum possible value in the array; suppose n=5,
                // then the array may be:
                int a[] = new int[]{3, 4, 4, 2, 1, 3, 5}; // all elements are <= n

                // create arrays of size a.length*n
                int b[] = new int[a.length * n];
                int c[] = new int[b.length];
                for (int i = 0; i < b.length; i++) {
                    b[i] = 0;
                    c[i] = 0;
                }
                for (int i = 0; i < a.length; i++) {
                    if (b[a[i]] == 1) {
                        c[a[i]] = 1;
                    } else {
                        b[a[i]] = 1;
                    }
                }
                for (int i = 0; i < b.length; i++) {
                    if (b[i] == 1) {
                        System.out.println(i);
                    }
                    if (c[i] == 1) {
                        System.out.println(i);
                    }
                }
            }
        }
        // prints: 1 2 3 3 4 4 5

    I have two questions: 1. What is the complexity (running time) of this algorithm? 2. How do I put the elements into another array in sorted order? Thanks.
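
    On question 1: the version above zeroes and scans arrays of size a.length*n, so it runs in O(n*k) time and space (n elements, k the maximum value), and the flag arrays can only record up to two occurrences of each value. For question 2, a textbook counting sort keeps a count per value instead, runs in O(n + k) time with O(k) extra space, and fills a sorted output array directly. A sketch in C#:

        static int[] CountingSort(int[] a, int maxValue)
        {
            int[] counts = new int[maxValue + 1];   // counts[v] = how many times v occurs
            foreach (int v in a)
                counts[v]++;

            int[] sorted = new int[a.Length];
            int pos = 0;
            for (int v = 0; v <= maxValue; v++)     // emit each value counts[v] times
                while (counts[v]-- > 0)
                    sorted[pos++] = v;
            return sorted;
        }

        // CountingSort(new int[] {3, 4, 4, 2, 1, 3, 5}, 5) -> {1, 2, 3, 3, 4, 4, 5}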

  • How to delete duplicate/aggregate rows faster in a file using Java (no DB)

    - by S. Singh
    I have a 2GB text file with 5 columns delimited by tabs. A row is called a duplicate only if 4 out of its 5 columns match. Right now, I am deduping by first loading each column into a separate List, then iterating through the lists, deleting duplicate rows as they are encountered, and aggregating. The problem: it is taking more than 20 hours to process one file, and I have 25 such files to process. Can anyone share their experience of how they would go about such deduping? This deduping code will be thrown away, so I am looking for a quick/dirty solution to get the job done as soon as possible. Here is my pseudo code (roughly):

        Iterate over the rows, i = current_row_no
            Iterate over rows i+1 to last_row
                if (col1 matches      // find duplicate
                    && col2 matches
                    && col3 matches
                    && col4 matches) {
                    col5List.set(i, get col5);   // aggregate
                }

    Duplicate example: A and B are duplicates, given A=(1,1,1,1,1), B=(1,1,1,1,2), C=(2,1,1,1,1), and the output would be:

        A=(1,1,1,1,1+2)
        C=(2,1,1,1,1)

    (notice that B has been kicked out).
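
    The nested scan above is O(n²) over tens of millions of rows, which is where the 20 hours go. A single pass with a hash map keyed on the four matching columns is O(n). A rough C# sketch of the idea (streaming the file line by line; it assumes the unique rows fit in memory, otherwise an external sort on the first four columns would be the fallback):

        using System;
        using System.Collections.Generic;
        using System.IO;

        static class Deduper
        {
            public static void Dedup(string inPath, string outPath)
            {
                // key = first four columns; value = full row, with col5 aggregated
                Dictionary<string, string[]> seen = new Dictionary<string, string[]>();
                foreach (string line in File.ReadLines(inPath))
                {
                    string[] cols = line.Split('\t');
                    string key = string.Join("\t", cols, 0, 4);
                    string[] row;
                    if (seen.TryGetValue(key, out row))
                        row[4] = row[4] + "+" + cols[4];   // aggregate col5 on duplicates
                    else
                        seen[key] = cols;
                }
                using (StreamWriter w = new StreamWriter(outPath))
                    foreach (string[] row in seen.Values)
                        w.WriteLine(string.Join("\t", row));
            }
        }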

  • Is there a general-purpose printf-ish routine defined in any C standard?

    - by supercat
    In many C libraries, there is a printf-style routine which looks something like the following:

        int __vgprintf(void *info, void (*print_function)(void*, char), const char *format, va_list params);

    It formats the supplied string and calls print_function with the passed-in info value and each character in sequence. A function like fprintf will pass __vgprintf the passed-in file parameter and a pointer to a function which casts its void* to a FILE* and outputs the passed-in character to that file. A function like snprintf will create a struct holding a char* and a length, and pass the address of that struct to a function which outputs each character in sequence, space permitting. Is there any standard for such a function, which could be used if, e.g., one wanted a function to output an arbitrary format to a TCP port? A common approach is to allocate a buffer one hopes is big enough, use snprintf to put the data there, and then output the data from the buffer. It would seem cleaner, though, if there were a standard way to specify that the print formatter should call a user-supplied routine with each character.

  • WCF: Exposed Object Model - stuck in a loop

    - by Mark
    Hi, I'm working on a pretty big WSSF project. I have a normal object model in the business layer, e.g. a customer has an Orders collection property; when this is accessed, it loads from the data layer (lazy loading). An order has a ProductCollection property, etc. Now, the bit I'm finding tricky is exposing this via WCF. I want to export a collection of orders, and the client app will also need information about the customers. Using the WSSF data contract designer, I have set things up so that customers have a property called "order collection". This is fine if you have a customer object and want to look at its orders, but if you have an order object there is no customer property, so it doesn't work going up the hierarchy. I've tried adding a customer property to the orders object, but then the code gets stuck in a loop when it loads up the data contracts. This is because it doesn't load on demand like the business layer does: I need to load all properties before the objects can be sent out via WCF. It ends up loading an order, then the customer for that order, then the orders for that customer, then the customer for each of those orders, etc. I'm sure I've got all this wrong. Help!
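
    For the cycle itself, WCF's DataContractSerializer can handle graphs with back-references if the contracts are marked reference-preserving: each object is written once and referred to afterwards, so Customer -> Orders -> Customer no longer recurses forever. A sketch with plain data contracts (the WSSF designer generates different types, so this shows the idea rather than a drop-in fix):

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract(IsReference = true)]
        public class CustomerContract
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public List<OrderContract> Orders { get; set; }
        }

        [DataContract(IsReference = true)]
        public class OrderContract
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public CustomerContract Customer { get; set; }  // back-reference is now safe
        }

    IsReference requires .NET 3.5 SP1 or later.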

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (average row length ~100 bytes) from an Oracle 10G database table into SQL Server 2005 (over a WAN/VLAN with 6 Mbit/s capacity) on a regular basis. So far, these are the options that I have tried, with a quick summary of each. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The times were calculated by running tests on smaller amounts of data and extrapolating:

    1. Using the data import wizard on the SQL Server, or SSIS packages, to import the data. It will take around 150 hours to complete the task.

    2. Using an Oracle batch job to spool the data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file. The issue here is the size of the flat file, which is expected to run into gigabytes.

    3. Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run time, to avoid bringing the data in at all. Performance is a big problem, and I have limited control over the Oracle database in terms of creating table indexes.

    Regards, Uniball
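
    A fourth option worth weighing is streaming the rows directly: open a data reader on the Oracle side and feed it to SqlBulkCopy, which uses SQL Server's bulk-load path, so nothing is ever staged on disk. A C# sketch (connection strings and table names are placeholders):

        using System.Data;
        using System.Data.OracleClient;   // or ODP.NET's Oracle.DataAccess.Client
        using System.Data.SqlClient;

        static class OracleToSqlServer
        {
            public static void Copy(string oracleConnString, string sqlConnString)
            {
                using (OracleConnection src = new OracleConnection(oracleConnString))
                using (OracleCommand cmd = new OracleCommand("SELECT * FROM source_table", src))
                {
                    src.Open();
                    using (IDataReader reader = cmd.ExecuteReader())
                    using (SqlBulkCopy bulk = new SqlBulkCopy(sqlConnString))
                    {
                        bulk.DestinationTableName = "dbo.target_table";
                        bulk.BatchSize = 10000;      // commit in batches, not one huge transaction
                        bulk.BulkCopyTimeout = 0;    // disable the timeout for a long copy
                        bulk.WriteToServer(reader);  // streams rows; never buffers the full set
                    }
                }
            }
        }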

  • How to get at JSON in grails 2.0

    - by Mikey
    I am sending myself JSON like so with jQuery:

        $.ajax({
            type: "POST",
            url: 'http://localhost:8080/myproject/myController/myAction',
            dataType: 'json',
            async: false,
            // json object to send to the authentication url
            data: {"stuff": "yes", "listThing": [1, 2, 3], "listObjects": [{"one": "thing"}, {"two": "thing2"}]},
            success: function () { alert("Thanks!"); }
        })

    I send this to a controller and do println params, and I can see I'm already in trouble:

        [stuff:yes, listObjects[1][two]:thing2, listObjects[0][one]:thing, listThing[]:[1, 2, 3], action:myAction, controller:myController]

    I cannot figure out how to get at most of these values. I can get "yes" with params.stuff, but I can't do params.listThing.each{} or params.listObjects.each{}. What am I doing wrong?

    UPDATE: I made the controller do this, to try the two suggestions so far:

        println params
        println params.stuff
        println params.list('listObjects')
        println params.listThing
        def thisWontWork = JSON.parse(params.listThing)
        render("omg l2json")

    Look how weird the parameters look at the end of the null pointer exception when I try the answers:

        [stuff:yes, listObjects[1][two]:thing2, listObjects[0][one]:thing, listThing[]:[1, 2, 3], action:l2json, controller:rateAPI]
        yes
        []
        null
        | Error 2012-03-25 22:16:13,950 ["http-bio-8080"-exec-7] ERROR errors.GrailsExceptionResolver - NullPointerException occurred when processing request: [POST] /myproject/myController/myAction - parameters:
        stuff: yes
        listObjects[1][two]: thing2
        listObjects[0][one]: thing
        listThing[]: 1
        listThing[]: 2
        listThing[]: 3

    UPDATE 2: I am learning things, but this can't be right:

        println params['listThing[]']
        println params['listObjects[0][one]']

    prints:

        [1, 2, 3]
        thing

    It seems like this is some part of the Grails JSON marshalling. This is somewhat inconvenient for my purpose of hacking around with the values. How would I get all these individual params back into a big Groovy object of nested maps and lists? Maybe I am not doing what I want with jQuery?

  • How to reliably identify users across Internet?

    - by amn
    I know this is a big one. In fact, it may be used for some SO community wiki. Anyway, I am running a website that does NOT use explicit authentication of users: it's public, as in open to everybody. However, due to the nature of the service, some users need to be locked out due to misbehavior. I am currently blocking IP addresses, but I am aware of the claim that many people purposefully reset their DHCP client cache to have their ISP assign them a new address. Is that a fact? It certainly seems a lucrative possibility for people who want to circumvent being denied access. So IPs turn out to be a suboptimal way of dealing with this. But is there anything else? MAC addresses don't survive on the WAN (they change from hop to hop), and even if they did, they can also be spoofed, although I think less easily than renewing an IP. Cookies, and even Flash cookies, are out of the question, because there are tons of "tutorials" on how to wipe them, and those intent on wreaking havoc on the Internet are well aware of, and well equipped against, such rudimentary measures. Is there anything else to lean on? I was thinking of heuristic profiling: collecting available data from the client side and forming some key from it. I have not gone as far as implementing it; is it an option?
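
    For what that profiling key might look like, here is a minimal C# sketch (server-side; which signals are worth including is the hard part, and every one of them can be spoofed too, so this only raises the bar):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class Fingerprint
        {
            public static string Key(string userAgent, string acceptLanguage, string ipPrefix)
            {
                // Combine several weak signals into one key; none is reliable alone.
                string raw = userAgent + "|" + acceptLanguage + "|" + ipPrefix;
                using (SHA1 sha = SHA1.Create())
                {
                    byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(raw));
                    return BitConverter.ToString(hash).Replace("-", "");
                }
            }
        }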

  • How do you work around memcached's key/value limitations?

    - by mjy
    Memcached has length limitations for keys (250 bytes?) and values (roughly 1 MB), as well as some (to my knowledge) not very well defined character restrictions for keys. What is the best way to work around those, in your opinion? I use the Perl API Cache::Memcached. What I do currently is: if the original value was too big, I store a special string for the main key's value ("parts:<number>"), and in that case I store <number> parts with keys named "1"+<main key>, "2"+<main key>, etc. This seems "OK" (but messy) for some cases and not so good for others, and it has the intrinsic problem that some of the parts might be missing at any time (so space is wasted keeping the others, and time is wasted reading them). As for the key limitations, one could probably implement hashing and store the full key (to work around collisions) in the value, but I haven't needed to do this yet. Has anyone come up with a more elegant way, or even a Perl API that handles arbitrary data sizes (and key values) transparently? Has anyone hacked the memcached server to support arbitrary keys/values?
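
    The chunking scheme itself is language-agnostic; here is a C# sketch of the write side, with the 'set' delegate standing in for whatever memcached client call is available (hashing the key also works around the 250-byte key length and the character restrictions, at the cost of losing readable keys):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class ChunkedCache
        {
            const int ChunkSize = 1000000;   // stay under memcached's ~1 MB value limit

            public static void SetLarge(Action<string, string> set, string key, string value)
            {
                string safeKey = HashKey(key);   // fixed length, no unsafe characters
                if (value.Length <= ChunkSize)
                {
                    set(safeKey, value);
                    return;
                }
                int parts = (value.Length + ChunkSize - 1) / ChunkSize;
                set(safeKey, "parts:" + parts);  // marker value, as in the scheme above
                for (int i = 0; i < parts; i++)
                    set(i + "+" + safeKey, value.Substring(
                        i * ChunkSize, Math.Min(ChunkSize, value.Length - i * ChunkSize)));
            }

            static string HashKey(string key)
            {
                using (SHA1 sha = SHA1.Create())
                    return BitConverter.ToString(
                        sha.ComputeHash(Encoding.UTF8.GetBytes(key))).Replace("-", "");
            }
        }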

  • How To perform a SQL Query to DataTable Operation That Can Be Cancelled

    - by David W
    I tried to make the title as specific as possible. Basically, what I have running inside a BackgroundWorker thread now is some code that looks like:

        SqlConnection conn = new SqlConnection(connstring);
        SqlCommand cmd = new SqlCommand(query, conn);
        conn.Open();
        SqlDataAdapter sda = new SqlDataAdapter(cmd);
        sda.Fill(Results);
        conn.Close();
        sda.Dispose();

    where query is a string representing a large, time-consuming query, and conn is the connection object. My problem now is that I need a stop button. I've come to realize that killing the BackgroundWorker would be worthless, because I still want to keep whatever results are left over after the query is cancelled; plus, it wouldn't be able to check the cancelled state until after the query. What I've come up with so far: I've been trying to work out how to handle this efficiently without taking too big a performance hit. My idea was to use a SqlDataReader to read the data from the query piece by piece, so that I had a "loop" in which to check a flag I could set from the GUI via a button. The problem is that, as far as I know, I can't use the Load() method of a DataTable and still be able to cancel the SqlCommand. If I'm wrong, please let me know, because that would make cancelling slightly easier. In light of this, I realized I may only be able to cancel the SqlCommand mid-query if I did something like the below (pseudo-code):

        while (reader.Read())
        {
            // check flag status
            // if it is set to 'kill', fire off the kill thread
            // otherwise populate the DataTable with what was read
        }

    However, it seems to me this would be highly inefficient and possibly costly. Is this the only way to kill a SqlCommand in progress that absolutely needs to end up in a DataTable? Any help would be appreciated!
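
    For what it's worth, SqlCommand has a Cancel() method that can be called from another thread (e.g. the Stop button's handler) to abort the command server-side, and checking a volatile flag per row is cheap next to the network read. A sketch along those lines, keeping whatever rows arrived before the stop:

        using System.Data;
        using System.Data.SqlClient;

        class CancellableQuery
        {
            public volatile bool CancelRequested;   // set from the GUI's Stop button

            public DataTable Run(string connString, string query)
            {
                DataTable results = new DataTable();
                using (SqlConnection conn = new SqlConnection(connString))
                using (SqlCommand cmd = new SqlCommand(query, conn))
                {
                    conn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        for (int i = 0; i < reader.FieldCount; i++)
                            results.Columns.Add(reader.GetName(i), reader.GetFieldType(i));

                        while (!CancelRequested && reader.Read())
                        {
                            object[] row = new object[reader.FieldCount];
                            reader.GetValues(row);
                            results.Rows.Add(row);
                        }

                        if (CancelRequested)
                            cmd.Cancel();   // stops the query; closing the reader won't drain the rest
                    }
                }
                return results;   // contains the rows read before cancellation
            }
        }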

  • Is there an IDE/compiler PC benchmark I can use to compare my PC's performance?

    - by RickL
    I'm looking for a benchmark (and results on other PCs) which would give me an idea of the development performance gain I could get by upgrading my PC; the benchmark could also be used to justify the upgrade to my boss. I use Visual Studio 2008 for my development, so I'd like to get an idea of the factor by which build times would improve, and it would also be good if the benchmark incorporated IDE performance (i.e. when editing, using IntelliSense, opening code files, etc.) into its result. I currently have an AMD 3800 X2 with 2GB RAM on Vista 32. For example, I'd like to know what kind of performance gain I'd see in Visual Studio 2008 with a Q6600 and 4GB RAM on Vista 64, and also with other processors and other RAM sizes; it would also be useful to see whether hard disk performance is a big factor. EDIT: I mentioned Vista 64 because I'm aware that Vista 32 can only use 3GB RAM at maximum. So I'd presume that wanting to use more RAM requires Vista 64, but perhaps it could still be slower overall if there is a large overhead in running the 32-bit VS 2008 on a 64-bit OS.

  • Save Xml in an Excel cell value causes ComException

    - by mas_oz2k1
    I am trying to save an object (Class1) as a string in a cell value. My issue is that from time to time, when I write the value into a cell, I get a ComException: HRESULT: 0x8007000E (E_OUTOFMEMORY). (It is kind of random, but I have not identified any particular pattern yet.) Any ideas will be welcome. For illustration purposes, let Class1 be the class to be converted to an XML string (notice that I removed the XML declaration at the start of the string, to avoid having the preamble's non-printable characters present):

        <Class1 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <ElementID>HL690375</ElementID>
        </Class1>

    This class is converted to a string s:

        Class1 myClass = new Class1();
        s = ConvertObjectToXmlString(myClass);

    Then s is assigned to a cell:

        Range r = Application.ActiveCell;
        r.Value2 = s;

    Notes: (1) If the string is too big, I limit it to 32000 chars, split it into chunks of 32000 chars, and save the chunks in multiple cells. (2) I do not quote the string before adding it to a cell. Do I need to? If so, how can it be done? (3) All object contents are English. (4) A C# code sample would be great, but VB.NET code is OK.
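
    On note (1), a sketch of the 32,000-character split in C# (Excel caps a cell at 32,767 characters, so 32,000 leaves some margin; the interop write loop in the comment is illustrative, not tested):

        using System;
        using System.Collections.Generic;

        static class CellChunker
        {
            public static List<string> Split(string s, int chunkSize)
            {
                List<string> chunks = new List<string>();
                for (int i = 0; i < s.Length; i += chunkSize)
                    chunks.Add(s.Substring(i, Math.Min(chunkSize, s.Length - i)));
                return chunks;
            }
        }

        // Usage idea: write each chunk one cell below the previous one.
        //   Range r = Application.ActiveCell;
        //   List<string> chunks = CellChunker.Split(s, 32000);
        //   for (int i = 0; i < chunks.Count; i++)
        //       r.Offset[i, 0].Value2 = chunks[i];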

  • best REGEXP friendly Text Editors + most powerful REGEXP syntax?

    - by John
    I am fluent with Microsoft Visual Studio 2005's regular expressions, and they are a big time-saver. I seem to learn them best by having a vaguely organized cheat sheet thrown at me, at which point I read just a little and play with them until I understand what's going on. That learning approach has worked well for me, for now. I would really like to take this to the next level, though. Basically: what is the REGEXP dialect that is generally regarded as the most open-ended and powerful? VS2005 regexps seem kind of gimped, so maybe I'm a kid playing in a sandbox. Are there text editors out there that can highlight all matches, list the lines containing a string, or perform some similarly powerful function, in conjunction with the very strongest REGEXP dialect? If not, I can just use multiple programs and a weird technique, but I'd like to avoid that. I wonder if a stronger REGEXP dialect, or a "stronger" regexp writer, might be able to have a search match all results on all lines, even on clicking "find next", by adding some simple criteria to the search. Anyway, please provide advice!

  • How to use named_scope to incrementally construct a complex query?

    - by wbharding
    I want to use Rails named_scope to incrementally build complex queries that are returned from external methods. I believe I can get the behavior I want with anonymous scopes, like so:

        def filter_turkeys
          Farm.scoped(:conditions => {:animal => 'turkey'})
        end

        def filter_tasty
          Farm.scoped(:conditions => {:tasty => true})
        end

        turkeys = filter_turkeys
        is_tasty = filter_tasty
        tasty_turkeys = filter_turkeys.filter_tasty

    But say that I already have a named_scope that does what these anonymous scopes do; can I just use that, rather than having to declare anonymous scopes? Obviously one solution would be to pass my growing query to each of the filter methods, a la:

        def filter_turkey(existing_query)
          existing_query.turkey   # turkey is a named_scope that filters for turkey
        end

    But that feels like an un-DRY way to solve the problem. Oh, and if you're wondering why I would want to return named_scope pieces from methods, rather than just build all the named scopes I need and concatenate them together: it's because my queries are too complex to be nicely handled with named_scopes themselves, and the queries get used in multiple places, so I don't want to manually build the same big hairy concatenation multiple times.

  • Performance improvement of client server system

    - by Tanuj
    I have a legacy client-server system where the server maintains a record of some data stored in a SQLite database. The data relates to monitoring the access patterns of files stored on the server. The client application is basically a remote viewer of that data: when the client is launched, it connects to the server and fetches the data to display in a grid view. The data gets updated in real time on the server, and the view in the client automatically refreshes. There are two problems with the current implementation:

    1. When the database gets too big, it takes a long time to load the client. What are the best ways to deal with this? One option is to maintain a cache on the client side. How is such a cache best implemented?

    2. How can the server maintain a diff, so that it only sends the diff during the refresh cycle? There can be multiple clients, and each client needs to display the latest data available on the server.

    The server is a Windows service daemon. Both the client and the server are implemented in C#.
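
    For problem 2, one common scheme is a per-row version stamp: the server tags every row with a monotonically increasing change counter, each client remembers the highest counter it has applied, and a refresh asks only for newer rows. A sketch of the server-side query in C# (assuming the System.Data.SQLite provider and a made-up schema with a row_version column):

        using System.Data.SQLite;

        static class ChangeFeed
        {
            public static SQLiteDataReader ChangesSince(SQLiteConnection conn, long clientVersion)
            {
                SQLiteCommand cmd = new SQLiteCommand(
                    "SELECT * FROM access_log WHERE row_version > @v ORDER BY row_version",
                    conn);
                cmd.Parameters.AddWithValue("@v", clientVersion);
                return cmd.ExecuteReader();   // each client applies these rows to its local cache
            }
        }

    The same column would help with problem 1: a client-side cache is then just the last full snapshot plus every diff applied since.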

  • Fastest XML parser for small, simple documents in Java

    - by Varkhan
    I have to objectify very simple and small XML documents (less than 1k, and it's almost SGML: no namespaces, plain UTF-8, you name it...), read from a stream, in Java. I am using JAXP to process the data from my stream into a Document object. I have tried Xerces; it's way too big and slow. I am using dom4j, but I am still spending way too much time in org.dom4j.io.SAXReader. Does anybody out there have a suggestion for a faster, more efficient implementation, keeping in mind that I have very tough CPU and memory constraints? [Edit 1] Keep in mind that my documents are very small, so the overhead of starting the parser can be significant. For instance, I am spending as much time in org.xml.sax.helpers.XMLReaderFactory.createXMLReader as in org.dom4j.io.SAXReader.read. [Edit 2] The result has to be in DOM format, as I pass the document to decision tools that do arbitrary processing on it, like switching code based on the value of arbitrary XPaths, but also extracting lists of values packed as children of a predefined node. [Edit 3] In any case, I eventually need to load/parse the complete document, since all the information it contains is going to be used at some point. (This question is related to, but different from, http://stackoverflow.com/questions/373833/best-xml-parser-for-java)

  • db4o problem with graph of objects

    - by Staszek28
    I am new to db4o, and I have a big problem with persisting a graph of objects. I am trying to migrate from an old persistence component to a new one using db4o. Before I persisted the objects, the graph looked as in my Eclipse debugger screenshot (take a look at the Zrodlo.Metadane.abstrakt string field, which has a value), and I stored it with this code:

        ObjectContainer db = Db4o.openFile(DB_FILE);
        try {
            db.store(encja);
            db.commit();
        } finally {
            db.close();
        }

    After that, I tried to read it back with this code:

        ObjectContainer db = Db4o.openFile(DB_FILE);
        try {
            Query q = db.query();
            q.constrain(EncjaDanych.class);
            ObjectSet<Object> objectSet = q.execute();
            logger.debug("objectSet.size" + objectSet.size());
            EncjaDanych encja = (EncjaDanych) objectSet.get(0);
            logger.debug("ENCJA" + encja.toString());
            return encja;
        } finally {
            db.close();
        }

    and now the string field "abstrakt" is null! I took a look at it using ObjectManager, and there the abstrakt field has a non-null value: the same value as in the debugger. Please help me :) It is my second day with db4o. Thanks in advance! I am attaching some code with the structure of the persisted class:

        public class EncjaDanych {
            Map mapaIdRepo = new HashMap();
            public Map mapaNazwaRepo = new HashMap();
        }

  • Is there a programmatic way to transform a sequence of image files into a PDF?

    - by Salim Fadhley
    I have a sequence of JPG images; each scan is already cropped to the exact size of one page. They are sequential pages of a valuable and out-of-print book. The publishing application requires that these pages be submitted as a single PDF file. I could take each of these images and just paste them into a word processor (e.g. OpenOffice); unfortunately, the problem here is that it's a very big book and I've got quite a few of these books to get through, so it would obviously be time-consuming. This is volunteer work! My second idea was to use LaTeX: I could make a very simple document that consists of nothing more than a series of inline image includes. I'm sure this approach could be made to work; it's just a little on the complex side for something which seems like a very simple job. It occurred to me that there must be a simpler way, so any suggestions? I'm on Ubuntu 9.10, and my primary programming language is Python, but if the solution is super simple I'd happily adopt any technology that works.

  • How to detect .NET WPF memory leak or GC long run?

    - by Néstor Sánchez A.
    I have the following very strange situation and problem: a .NET 4.0 application for diagram editing (WPF). It runs OK on my PC: 8GB RAM, 3.0GHz, quad-core i7. While creating objects (mostly diagram nodes and connectors, plus all the undo/redo information), Task Manager shows, as expected, some memory-usage "jumps" (up and down). These mem-usage "jumps" also keep occurring AFTER the user interaction has ended. Maybe this is the GC cleaning/reorganizing memory? To see what is going on, I've used the ANTS memory profiler, but somehow it prevents those "jumps" from happening after user interaction. PROBLEM: the application freezes/hangs after seconds or minutes of usage on some of my beta testers' slow/weak laptops/netbooks (under 2GHz of speed and under 2GB of RAM). I was thinking of a memory leak, but... EDIT: there is also the case where the memory usage grows and grows until collapse (only on slow machines). On a Windows XP Mode machine (a VM in Win 7) with only 512MB of RAM assigned, it works fine, without mem-usage "jumps" after user interaction (no GC cleaning?!). So I really have a big problem: I cannot reproduce the error, I can only see this strange behaviour (mem jumps), and the tool that is supposed to show me what is happening hides the problem (like the "observer's paradox"). Any ideas on what's happening and how to solve it?

  • How to separate sets of numbers onto separate lines

    - by Fred
    About the script: the script below should create 300 sets of random characters. What is presently happening is that it creates them but shows them all on one line, in one big chunk. With all the searching and testing I've done to try and achieve this, I have had no success. I would like to know what code to use, and where to put it, so that each of the 300 sets, 15 characters long, will be shown and saved to file on its own line. Here is my script:

        <?php
        function GetID($x){
            $characters = array_merge(range('A','Z'), range('a','z'), range(2,9));
            shuffle($characters);
            for ($x = 0; $x <= 299; $x++) {
            }
            for (; strlen($ReqID) < $x;) {
                $ReqID .= $characters[mt_rand(0, count($characters))];
            }
            return $ReqID;
        }

        $ReqID .= GetID(5);
        $ReqID .= "-";
        $ReqID .= GetID(5);
        $ReqID .= "-";
        $ReqID .= GetID(5);
        echo $ReqID;

        $fh = fopen("file.txt", "a+");
        fwrite($fh, ("$ReqID") . "\n");
        fclose($fh);
        ?>

  • Advanced search engine or server for relational database [closed]

    - by Pawel
    In my current project we are storing a big volume of data in a relational database. One of the recent key requirements is to enrich the application by adding some advanced search capabilities. In the project, performance is one of the important factors, due to very large tables (10+ million records) with parent-child relations (for example, a multi-level parent-child relationship where I am looking for all parents with specific children). The search engine should also be able to check these references for hits. I have found some potential engines on Stack Overflow; however, it looks like all of them are dedicated to text search rather than relational databases, and are hosted on Linux:

        - Lucene
        - Solr
        - Sphinx

    As I understand it, some of them use documents as the source for searching, but is it possible or efficient to create documents programmatically from my relational data? As I am not familiar with all of their features/capabilities, can anyone please make some recommendations or propose a different solution? To summarize my requirements:

        - a framework/engine to search a relational database, including descendants
        - support for Microsoft SQL Server
        - can be used in .NET applications
        - preferably hosted on Windows systems

    Does anything mentioned above solve my problem? Do you know any better solution?

  • Running code/script as a result of a form submission in ASP.NET

    - by firmbeliever
    An outside vendor did some HTML work for us, and I'm filling in the actual functionality. I have an issue that I need help with. He created a simple HTML page that is opened as a modal pop-up. It contains a form with a few input fields and a submit button. On submitting, an email should be sent using info from the input fields. I turned his simple HTML page into a simple ASPX page, added runat="server" to the form, and added the C# code inside script tags to create and send the email. It technically works, but has a big issue: after the information is submitted and the email is sent, the page (which is supposed to just be a modal pop-up type thing) gets reloaded, but it is now no longer a pop-up; it's reloaded as a standalone page. So I'm trying to find out if there is a way to get the form to just execute those few lines of C# code on submission without reloading the form. I'm somewhat aware of CGI scripts, but from what I've read, that can be buggy with IIS and all. Plus, I'd like to think I could get these few lines of code to run without creating a separate executable. Any help is greatly appreciated.

  • redirecting the root domain - SEO and other issues, need some guidance!

    - by Jim Sp
    I'm not familiar with some of these forwarding methods and I need help. My issue is this: I have a site hosted on discountasp.net. My domain was registered through 1&1, and I pointed the DNS to what discountasp.net wanted. So when a user types www.mydomain.com, he/she sees the ASP.NET site hosted on discountasp.net, which is all fine. My main page is Index.aspx. I really suck at HTML page design, and I don't have the time or the talent to fiddle with it (or the money to have it done by a pro). The rest of the pages are fine. I want to use a good theme from Tumblr or Blogger (one of the blog sites) and create a page that I want to use as the first page, directly on Blogger or Tumblr, say yyy.blogspot.com (I have many reasons; for now, please don't bash my decision, let's just say that's what I want). That means when a user types www.mydomain.com, it should redirect to the Blogger or Tumblr page. Everything else stays the same: the links on the blog page will say www.mydomain.com/xxxx and show what's on the hosted website. I have set up the IIS rewrite rules etc. so that all of that works just fine. The bottom line is, I want to show an external site's web page as my root page. I suppose I'm struggling to even explain what I want! I can of course do a Response.Redirect on the Index.aspx page, which is the simplest way to manage this, but the big question is: will this hurt SEO in some way? If not, that is what I'll do, leaving the rest of the infrastructure intact (I have already done this as a test, and it works fine). Thank you very much. J
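
    On the SEO question: a plain Response.Redirect sends a 302 (temporary) status, while search engines generally treat a 301 (permanent) redirect as the cue to transfer the old URL's standing to the new one. .NET 4.0 added Response.RedirectPermanent for this; a sketch of what the Index.aspx code-behind could do (the blog URL is a placeholder):

        using System;

        public partial class Index : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // 301 Moved Permanently: crawlers follow it and update their index
                Response.RedirectPermanent("http://yyy.blogspot.com/");
            }
        }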
