Search Results

Search found 22569 results on 903 pages for 'win32 process'.

  • problem in case of Windows service

    - by prateeksaluja20
    Hello friends, I made a Windows service and added a project installer. The service contains only this code, inside a timer tick event with the interval set to 60 seconds (I just wanted to try running a Windows service):

        System.Diagnostics.Process.Start(@"C:\Windows\system32\notepad.exe");

    First, I changed the serviceProcessInstaller1 account setting to Local System. Second, I changed the serviceInstaller1 startup type to Automatic. Then I created a setup: I added another project, right-clicked it, added the project output (primary output), and pressed OK. Then I went to right-click on project > View > Custom Actions, right-clicked Install > Add Custom Action, selected Application Folder, and added the primary output; I did the same thing for all the remaining options (Commit, Rollback, Uninstall). After that I built the setup, and it built successfully. I installed it, and it installed properly into Program Files, creating one .exe file and one install file. The problem is that when I look for the service in services.msc, it is not there; the service is not showing up. I have tried but can't find the answer. Please help me solve this problem.
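
    For comparison, here is a minimal sketch of the installer plumbing this setup relies on. This is not the poster's code; the class name ProjectInstaller and the service name "MyTimerService" are assumptions. The most common reasons a service installs without appearing in services.msc are that the installer class is never discovered (missing [RunInstaller(true)]) or that the MSI custom actions never actually ran it:

        using System.ComponentModel;
        using System.Configuration.Install;
        using System.ServiceProcess;

        [RunInstaller(true)] // without this attribute the MSI custom actions skip the class
        public class ProjectInstaller : Installer
        {
            public ProjectInstaller()
            {
                var processInstaller = new ServiceProcessInstaller
                {
                    Account = ServiceAccount.LocalSystem // the poster's first step
                };
                var serviceInstaller = new ServiceInstaller
                {
                    ServiceName = "MyTimerService",        // must match ServiceBase.ServiceName
                    StartType = ServiceStartMode.Automatic // the poster's second step
                };
                Installers.Add(processInstaller);
                Installers.Add(serviceInstaller);
            }
        }

    If the setup project seems wired correctly and the name still never shows up, running installutil against the installed .exe from an elevated prompt will at least report whether the installer class is being found at all.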

  • Performance of inter-database query (between linked servers)

    - by Swoosh
    I have an import between two linked servers. Basically, I have to get the data from a multi-table join into a table on my side. The current query is something like this:

        select a.*
        from db1.dbo.tbl1 a
        inner join db1.dbo.tbl2 on ...
        inner join db1.dbo.tbl3 on ...
        inner join db1.dbo.tbl4 on ...
        inner join db2.dbo.myside on ...

    Here db1 is the linked server and db2 is my own database. After this query, I use an INSERT INTO ... SELECT to add the data to my table in db2 (usually a few hundred records, with the import running once a minute). My question is about performance. The tables on the linked server (tbl1, tbl2, tbl3, tbl4) are huge, with millions of records, and they are slowing down the import process. I was told that if I do the join on the "other" side (db1, the linked server), for example in a stored procedure, then even if the query looks the same it would run faster. Is that right? This is kind of hard to test. Note that the join contains a table from my database too. Also, are there other "tricks" I could use to make this run faster? Thanks
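
    The usual trick being hinted at is to make the remote server execute the remote-only part of the join, so only the already-filtered rows cross the link. One hedged sketch using OPENQUERY, wrapped in an ADO.NET call; the connection string, column names, and join keys are placeholders, not the poster's schema:

        using System.Data.SqlClient;

        class RemoteJoinSketch
        {
            static void Main()
            {
                // Assumption: db1 is the linked-server name registered on our server (db2).
                // OPENQUERY runs its inner text entirely on the remote server, so the
                // four-table join happens there; the local table is joined afterwards.
                const string sql = @"
                    INSERT INTO dbo.target (col1, col2, col3, loaded_at)
                    SELECT r.col1, r.col2, r.col3, GETDATE()
                    FROM OPENQUERY(db1, '
                        SELECT a.col1, a.col2, a.col3
                        FROM dbo.tbl1 a
                        JOIN dbo.tbl2 b ON a.id = b.id
                        JOIN dbo.tbl3 c ON b.id = c.id
                        JOIN dbo.tbl4 d ON c.id = d.id') AS r
                    JOIN dbo.myside m ON m.keycol = r.col1;  -- local join stays local";

                using (var conn = new SqlConnection("Data Source=.;Initial Catalog=db2;Integrated Security=true"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }

    A stored procedure on db1 achieves the same thing for the remote portion; either way, the point is to keep the millions of rows from being pulled across the link before filtering.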

  • Map inheritance from generic class in Linq To SQL

    - by Ksenia Mukhortova
    Hi everyone, I'm trying to map my inheritance hierarchy to the DB using LINQ to SQL. The inheritance looks like this; the classes are POCO, without any LINQ to SQL attributes:

        public interface IStage { ... }
        public abstract class SimpleStage<T> : IStage where T : Process { ... }
        public class ConcreteStage : SimpleStage<ConcreteProcess> { ... }

    Here is the mapping:

        <Database Name="NNN" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007">
          <Table Name="dbo.Stage" Member="Stage">
            <Type Name="BusinessLogic.Domain.IStage">
              <Column Name="ID" Member="ID" DbType="Int NOT NULL IDENTITY" IsPrimaryKey="true" IsDbGenerated="true" AutoSync="OnInsert" />
              <Column Name="StageType" Member="StageType" IsDiscriminator="true" />
              <Type Name="BusinessLogic.Domain.SimpleStage" IsInheritanceDefault="true">
                <Type Name="BusinessLogic.Domain.ConcreteStage" IsInheritanceDefault="true" InheritanceCode="1"/>
              </Type>
            </Type>
          </Table>
        </Database>

    At runtime I get this error:

        System.InvalidOperationException was unhandled
        Message="Mapping Problem: Cannot find runtime type for type mapping 'BusinessLogic.Domain.SimpleStage'."

    Neither specifying SimpleStage nor SimpleStage<T> in the mapping file helps; the runtime keeps producing different kinds of errors. The DataContext is created like this:

        StreamReader sr = new StreamReader(@"MappingFile.map");
        XmlMappingSource mapping = XmlMappingSource.FromStream(sr.BaseStream);
        DataContext dc = new DataContext(@"connection string", mapping);

    If LINQ to SQL doesn't support this, could you please advise some other ORM that does? Thanks in advance. Regards, Ksenia
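
    One thing worth checking before switching ORMs: the runtime name of a generic type is not its C# spelling, so a mapping Name of "SimpleStage" can never match. A small diagnostic sketch (namespaces assumed from the post) that prints the CLR names the mapping loader would have to see:

        using System;
        using BusinessLogic.Domain;

        class MappingNameProbe
        {
            static void Main()
            {
                // Closed generic: prints something like
                // "BusinessLogic.Domain.SimpleStage`1[[BusinessLogic.Domain.ConcreteProcess, ...]]"
                Console.WriteLine(typeof(SimpleStage<ConcreteProcess>).FullName);

                // Open generic: "BusinessLogic.Domain.SimpleStage`1"
                Console.WriteLine(typeof(SimpleStage<>).FullName);

                // Non-generic leaf: "BusinessLogic.Domain.ConcreteStage"
                Console.WriteLine(typeof(ConcreteStage).FullName);
            }
        }

    If even the backtick spelling is rejected, that is consistent with LINQ to SQL simply not supporting generic types in inheritance mappings, and an ORM with more flexible mapping (NHibernate is the usual suggestion) becomes the fallback.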

  • How to create a progress bar while downloading a file using the Windows API?

    - by Jorge Chayan
    I'm working on an application in MS Visual C++ using the Windows API that must download a file and place it in a folder. I have already implemented the download using the URLDownloadToFile function, but I want to show a PROGRESS_CLASS progress bar with marquee style while the file is being downloaded, and it doesn't seem to get animated in the process. This is the function I use for downloading:

        BOOL SOXDownload()
        {
            HRESULT hRez = URLDownloadToFile(NULL, "url", "C:\\sox.zip", 0, NULL);
            if (hRez == E_OUTOFMEMORY)
            {
                MessageBox(hWnd, "Out of memory Error", "", MB_OK);
                return FALSE;
            }
            if (hRez != S_OK)
            {
                MessageBox(hWnd, "Error downloading sox.", "Error!", MB_ICONERROR | MB_SYSTEMMODAL);
                return FALSE;
            }
            if (hRez == S_OK)
            {
                BSTR file = SysAllocString(L"C:\\sox.zip");
                BSTR folder = SysAllocString(L"C:\\");
                Unzip2Folder(file, folder);
                ::MessageBoxA(hWnd, "Sox Binaries downloaded successfully", "Success", MB_OK);
            }
            return TRUE;
        }

    Later, inside WM_CREATE (in my main window's message handler), I call:

        if (!fileExists("C:\\SOX\\SOX.exe"))
        {
            components[7] = CreateWindowEx(0, PROGRESS_CLASS, NULL,
                WS_VISIBLE | PBS_MARQUEE,
                GetSystemMetrics(SM_CXSCREEN) / 2 - 80,
                GetSystemMetrics(SM_CYSCREEN) / 2 + 25,
                200, 50, hWnd, NULL, NULL, NULL);
            SetWindowText(components[7], "Downloading SoX");
            SendMessage(components[7], PBM_SETRANGE, 0, (LPARAM) MAKELPARAM(0, 50));
            SendMessage(components[7], PBM_SETMARQUEE, TRUE, MAKELPARAM(0, 50));
            SOXDownload();
            SendMessage(components[7], WM_CLOSE, NULL, NULL);
        }

    And as I want, I get a tiny progress bar... but it's not animated, and when I place the cursor over the bar, the cursor indicates that the program is busy downloading the file. When the download completes, the window closes as requested by the final SendMessage with WM_CLOSE. So the question is: how can I make the bar move while downloading the file? I want the marquee style for simplicity. Thanks in advance.
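
    The marquee freezes because URLDownloadToFile blocks the thread that owns the window; while it runs, no messages are pumped, so the progress control never repaints. The fix is to run the download on a worker thread (or drive URLDownloadToFile with an IBindStatusCallback from a thread that keeps pumping). A sketch of the same idea in C#/WinForms for brevity; the URL and file path are placeholders, and the Win32 version would call URLDownloadToFile from a thread created with CreateThread and post a message back when done:

        using System;
        using System.Net;
        using System.Windows.Forms;

        class DownloadForm : Form
        {
            readonly ProgressBar bar = new ProgressBar
            {
                Style = ProgressBarStyle.Marquee, // same idea as PBS_MARQUEE
                Left = 10, Top = 10, Width = 200
            };

            DownloadForm()
            {
                Controls.Add(bar);
                var client = new WebClient();
                // DownloadFileAsync does its work off the UI thread, so the message
                // loop keeps running and the marquee keeps animating.
                client.DownloadFileCompleted += (s, e) => Close();
                client.DownloadFileAsync(new Uri("http://example.com/sox.zip"), @"C:\sox.zip");
            }

            [STAThread]
            static void Main()
            {
                Application.Run(new DownloadForm());
            }
        }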

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approximately 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16 GB RAM, 10 terabyte drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server. We just upgraded from a machine with only 4 GB RAM but similar hard drives, RAID, etc. It also ran Apache, so it was both our DB server and our application server. It was getting a little slow, so we split the DB server onto this new machine and kept the application server on the first machine. We also distributed the application load among a few of our other slave servers, which also run the application.

    The problem is that on the new DB server, mysqld.exe consumes 95-100% of the CPU almost all the time, and this is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting config files for small machines, so can anyone help me get the my.ini file right for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: we have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (for example, changing the server-id will kill the server at startup). Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition of the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.

  • Best way to update/insert into a table based on a remote table.

    - by martilyo
    I have two very large enterprise tables in an Oracle 10g database. One table keeps the historical information of the other table. The problem is that the records are now so many that my insert/update is taking too long, and my session is getting killed by the governor. Here's pseudocode for my update process:

        sqlsel := 'SELECT col1, col2, col3, sysdate
                   FROM table2@remote_location dpi
                   WHERE (col1, col2, col3) IN (
                       SELECT col1, col2, col3 FROM table2@remote_location
                       MINUS
                       SELECT DISTINCT col1, col2, col3 FROM table1 mpc
                       WHERE facility = '''||load_facility||'''
                   )';
        EXECUTE IMMEDIATE sqlsel BULK COLLECT INTO table1;

    I've tried the MERGE statement:

        MERGE INTO table1 t1
        USING (SELECT col1, col2, col3 FROM table2@remote_location) t2
        ON (t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3)
        WHEN NOT MATCHED THEN
          INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm)
          VALUES (t2.col1, t2.col2, t2.col3, sysdate)

    But there seems to be a confirmed bug in versions prior to Oracle 10.2.0.4 when MERGE is run against a remote database. The chance of getting an enterprise upgrade is slim, so is there a way to further optimize my first query, or to write it another way for the best performance? Thanks.

  • odd url, and difficulty in following the php page flow

    - by sdfor
    I'm trying to understand code that I bought so I can modify it. In index.php there are picture links:

        <a href="test10,10"><img title="" border=1 src="makethumb.php?pic=product_images/test101.jpg&amp;w=121&amp;sq=N" /></a>

    I don't understand the href, since it doesn't point to a page; test10 is the id of a picture. I assumed it was going back to index.php and that the code would extract "test10,10" from the URL, but it's not. I know that because I put trace code in as the first line. The question is: where does the link go? I know that somewhere in the process it executes a page called profile.php, but nowhere in the source code (doing a global search) is there an explicit call to profile.php. As a related question, is there a way to profile the code to see what pages it's calling without using Xdebug, which for the life of me I can't get working after many hours of trying every suggestion I found here and elsewhere? (I'm using XAMPP.) Thanks

  • Inline form editing on client side

    - by bykasif
    I see some websites use dynamic forms (I am not sure what to call them) to edit a group of data. For example: there is a group of data such as name, last name, city, country, etc. When the user clicks an EDIT button, instead of doing a postback, a form consisting of two textboxes and two comboboxes dynamically opens for editing. Then, when you click the Save button, the edit form disappears and all the data updates. Now, I know that what happens here is Ajax for the server calls and some JavaScript for DOM manipulation. I even found some jQuery plugins for textbox editing; however, I could not find anything that fully implements form fields. Therefore I implemented it in ASP.NET with jQuery Ajax calls and manual DOM manipulation. Here is my process:

    1) When the Edit button is clicked: make an Ajax call to the server to retrieve the necessary formedit.aspx.
    2) It returns editable form fields with values assigned.
    3) When the Save button is clicked: make an Ajax call to the server to retrieve the formupdateprocess.aspx page. It does the database updates and then returns the DOM snippet (...) to insert into the current page.

    Well, it works, but MY PROBLEM is performance. The result seems slower than the samples I see on other sites :(( Is there anything I don't know — a better way to implement this?

  • Should nested attributes be automatically deleted when I delete the parent record?

    - by brad
    I'm playing around with nested form attributes and have a model Invoice that has_many :invoice_phone_numbers. I have the following line in my invoice.rb model file:

        accepts_nested_attributes_for :invoice_phone_numbers, :allow_destroy => true,
          :reject_if => proc { |attrs| attrs.all? { |k, v| v.blank? } }

    This does what it should, and I can delete invoice_phone_numbers from the form by selecting their 'delete' checkbox. But I have noticed that when I delete an Invoice, the nested invoice_phone_numbers are not also deleted. This causes problems, as Rails seems to reuse id numbers in the Invoice model (should it? does this depend on the database? I'm using SQLite3), so phone numbers from previous invoices turn up in new invoices after they are created. Anyway, my question is: should the nested records be deleted when I delete the parent record? Is there a way to make this happen automatically as part of the nesting process, or do I need to deal with it in my Invoice model? If so, what is the best way to do it? I would try a before_destroy callback, but I want to know if that is the best approach. Anyway, thanks.

  • Booking logic and architecture, database sync: Hotels, tennis courts reservation system ...

    - by coulix
    Hello Stackers, imagine that you want to design a tennis booking system. You have five tennis clubs as partners, with no online API that lets you check on their side whether a court is booked or not: you have to build that part as well.

    - Every time a booking is made on their side, you want our system to know about it, probably via a POST request from the tennis partner to our server.
    - Every time a booking is made on our website, we want to push the booking to their system. The difficulty is that their system needs to be online and accessible from outside; the IP may change, so we have to use a DNS updater.
    - In case their system is not available, we still accept the booking and fall back to an async email with 'I confirm booking / reject booking' links sent to the club.

    I find the whole process quite complex, and I was wondering how online hotel booking systems and hotels work together. Do they all have their data open and online? The good thing is that the data will grow large and fits nicely into a NoSQL store ;) like CouchDB.

  • How to "End Task" not "Kill" or "Terminate"?

    - by Luiscencio
    Hi community. I have a 3G card that provides internet to a remote computer. I have to run a program (provided with the card) to establish the connection. Since the connection is sometimes suddenly lost, I wrote a script that kills the program and reopens it so that the connection is re-established. However, certain versions of this program don't drop the connection when killed/terminated, only when closed properly. So I am looking for a script or program that "properly closes" a window, so I can close and reopen the program whenever the connection is lost. This is the code that kills the program:

        Option Explicit
        Dim objWMIService, objProcess, colProcess
        Dim strComputer, strProcessKill
        strComputer = "."
        strProcessKill = "'Telcel3G.exe'"

        Set objWMIService = GetObject("winmgmts:" _
            & "{impersonationLevel=impersonate}!\\" _
            & strComputer & "\root\cimv2")

        Set colProcess = objWMIService.ExecQuery _
            ("Select * from Win32_Process Where Name = " & strProcessKill)

        For Each objProcess in colProcess
            objProcess.Terminate()
        Next

        WScript.Echo "Just killed process " & strProcessKill _
            & " on " & strComputer
        WScript.Quit
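
    For the "properly close" part, here is a sketch in C# rather than VBScript (a deliberate swap, since WMI's Terminate() is always a hard kill). Process.CloseMainWindow sends WM_CLOSE to the program's main window, the same polite shutdown that clicking the window's X or Task Manager's "End Task" triggers. The process name comes from the script above; the 10-second timeout is an assumption:

        using System;
        using System.Diagnostics;

        class GracefulClose
        {
            static void Main()
            {
                foreach (var proc in Process.GetProcessesByName("Telcel3G"))
                {
                    var name = proc.ProcessName; // cache before the process exits

                    // Ask nicely first: WM_CLOSE to the main window.
                    if (proc.CloseMainWindow())
                    {
                        if (!proc.WaitForExit(10000)) // give it 10 s to shut down
                            proc.Kill();              // fall back to a hard kill
                    }
                    else
                    {
                        proc.Kill(); // no main window to close; terminate instead
                    }
                    Console.WriteLine("Closed " + name);
                }
            }
        }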

  • Rails: How do I validate against this code that I put into the lib/ directory?

    - by randombits
    Having a bit of difficulty finding the proper way to mix in code that I put into the lib/ directory for Rails 2.3.5. I have several models that require phone validation. At least three models used the same code, so I wanted to keep things DRY and moved it out to the lib/ directory. I used to have code like this in each model:

        validate :phone_is_valid

    Then I'd have a phone_is_valid method in the model:

        protected

        def phone_is_valid
          # process a bunch of logic
          errors.add_to_base("invalid phone") if validation failed
        end

    I moved this code into lib/phones/, where I have lib/phones/phone_validation.rb, into which I copy-pasted the phone_is_valid method. My question is: how do I mix this into all of my models now? Does my validate :phone_is_valid call remain the same, or does it change? I want to make sure the errors.add_to_base call continues to function as it did before while keeping everything DRY. I also created another file in lib/phones/ called lib/phones/phone_normalize.rb. Again, many models need the value input by the user to be normalized, meaning turning (555) 222-1212 into 5552221212 or something similar. Can I invoke that simply by calling Phones::Phone_Normalize::normalize_method(number)? I suppose I'm confused about the following:

    1. How to use the lib directory for validation
    2. How to use the lib directory for commonly shared methods that return values

  • django multiprocess problem

    - by iKiR
    I have a Django application running under lighttpd via FastCGI. The FCGI launch script looks like:

        python manage.py runfcgi socket=<path>/main.socket method=prefork \
            pidfile=<path>/server.pid \
            minspare=5 maxspare=10 maxchildren=10 maxrequests=500

    I use SQLite, so I have 10 processes which all work with the same DB. Next, I have two views:

        def view1(request):
            ...
            obj, _ = MyModel.objects.get_or_create(id=1)
            obj.param1 = <some value>
            obj.save()

        def view2(request):
            ...
            obj, _ = MyModel.objects.get_or_create(id=1)
            obj.param2 = <some value>
            obj.save()

    If these views are executed in two different processes, I sometimes end up with a MyModel instance in the DB with id=1 and either param1 or param2 updated, but NOT both; it depends on which process saved first. (Of course, in real life the id changes, but sometimes two processes do execute these two views with the same id.) The question is: what should I do to get an instance with both param1 and param2 updated? I need some way of merging changes made in different processes. One solution is to create an interprocess lock object, but in that case the views would execute sequentially rather than simultaneously, so I'm asking for help.

  • In Scrum, should a team remove points from (defect) stories that don't result in a code change?

    - by CanIgtAW00tW00t
    My work uses a Scrum-like process to manage projects. I say Scrum-like because we call it Scrum, but our project managers exclude aspects of Scrum that are inconvenient (most notably customer interaction). One of the stories in our current sprint was to correct a defect. After spending almost an entire day working on the issue, I determined that it was the result of a permissions problem, so I didn't end up modifying any code. Our Scrum master / project manager decided that no code change equals zero points. I know that Scrum points are supposed to measure size/complexity and not time, but our Scrum master invests a lot of time in preparing graphs and statistical information from past sprints (average velocity, average points completed, etc.). I've always been of the opinion that for statistics to be meaningful in any way, the data must be as accurate as possible. All of our data is fuzzy to begin with because, from time to time, we're encouraged by the Scrum master to "adjust" our size/complexity estimates, both increasing and decreasing them. I'd like to hear other developers' / Scrum team members' thoughts on the merits of statistics based on past sprints, and also whether they think it's appropriate to "adjust" size/complexity estimates in the middle of a sprint, or to remove all points from a story altogether in situations like the one I've just described.

  • PC boot: dl register and drive number

    - by kikou
    I read somewhere on the internet that, before jumping to 0x7c00, the BIOS loads the "drive number" of the booted device into %dl. But what is this "drive number"? Is each device attached to the computer assigned a number by the BIOS? If so, how can I know which number a given device is assigned? Reading GRUB's source code, I found that when %dl has bits of 0x80 and 0x70 set, it overwrites the whole register with 0x80. Why is that? Here is the code:

        jmp 3f    /* grub-setup may overwrite this jump */
        testb $0x80, %dl
        jz 2f
        3:
        /* Ignore %dl different from 0-0x0f and 0x80-0x8f. */
        testb $0x70, %dl
        jz 1f
        2:
        movb $0x80, %dl
        1:

    By the way, is there any detailed resource on the boot process of PCs on the web? Especially about what the BIOS does before giving control to the bootloader, and the standard codes used to communicate with it (like that "drive number"). I'm hoping to write my own bootloader, and everything I've found is a bit too vague, not technical enough about the exact state of the computer when my bootloader starts to run.

  • throwing exception from APCProc crashes program

    - by lazy_banana
    I started to do some research on how to terminate a multithreaded application properly, and I found those two posts (first, second) about how to use QueueUserAPC to signal other threads to terminate. I thought I should give it a try, and the application keeps crashing when I throw the exception from the APC proc. Code:

        #include <stdio.h>
        #include <windows.h>

        class ExitException
        {
        public:
            char *desc;
            DWORD exit_code;
            ExitException(char *desc, int exit_code):
                desc(desc), exit_code(exit_code)
            {}
        };

        // I use this class to check if objects are deconstructed upon termination
        class Test
        {
        public:
            char *s;
            Test(char *s): s(s)
            {
                printf("%s ctor\n", s);
            }
            ~Test()
            {
                printf("%s dctor\n", s);
            }
        };

        DWORD CALLBACK ThreadProc(void *useless)
        {
            try
            {
                Test t("thread_test");
                SleepEx(INFINITE, true);
                return 0;
            }
            catch (ExitException &e)
            {
                printf("Thread exits\n%s %lu", e.desc, e.exit_code);
                return e.exit_code;
            }
        }

        void CALLBACK exit_apc_proc(ULONG_PTR param)
        {
            puts("In APCProc");
            ExitException e("Application exit signal!", 1);
            throw e;
            return;
        }

        int main()
        {
            HANDLE thread = CreateThread(NULL, 0, ThreadProc, NULL, 0, NULL);
            Sleep(1000);
            QueueUserAPC(exit_apc_proc, thread, 0);
            WaitForSingleObject(thread, INFINITE);
            puts("main: bye");
            return 0;
        }

    My question is: why does this happen? I use MinGW for compilation, and my OS is 64-bit. Can this be the reason? I read that you shouldn't call QueueUserAPC from a 32-bit app for a thread which runs in a 64-bit process or vice versa, but this shouldn't be the case here.

  • jQuery ajax only works first time

    - by Michael Itzoe
    I have a table of data; on a button click, certain values are saved to the database while other values are retrieved. I need the process to be continuous, but I can only get it to work the first time. At first I was using .ajax() and .replaceWith() to rewrite the entire table, but because this overwrites the DOM it was losing events associated with the table. I cannot use .live(), because I'm using stopPropagation() and .live() doesn't support it due to event bubbling. I was able to essentially re-bind the click event onto the table within the .ajax() callback, but a second call to the button click event did nothing. I changed the code to use .get() for the Ajax call and .html() to put the results in the table (the server-side code now returns the complete table sans the <table> tags). I no longer have to re-bind the click event to the table, but subsequent clicks on the button still do nothing. Finally, I changed it to .load(), but with the same (non-) results. By "do nothing" I mean that while the Ajax call returns the new HTML as expected, it's not being applied to the table. I'm sure it has something to do with altering the DOM, but I thought that since I'm only overwriting the table contents and not the table object itself, it should work. Obviously I'm missing something; what is it?

    Edit — HTML:

        <table id="table1" class="mytable">
          <tr>
            <td><span id="item1" class="myitem"></span></td>
            <td><span id="item2" class="myitem"></span></td>
          </tr>
        </table>
        <input id="Button1" type="button" value="Submit" />

    jQuery:

        $( "#Button1" ).click( function() {
            $( "#table1" ).load( "data.aspx", function( data ) {
                //...
            } );
        } );

  • Where are mpx386.s and start.c in Minix 3.2?

    - by John Bowlinger
    I'm trying to follow along in Operating Systems: Design and Implementation, 3rd edition, and I'm now at the part of the book where Tanenbaum discusses bootup and kernel process switching. He keeps referring to two files (mpx386.s, start.c) that are supposedly in a directory called kernel, but I can't seem to find them. From the root directory, when I go to boot/minix/3.2.0/kernel, kernel just seems to be a binary file that is illegible in a terminal. There is also a bunch of mod01-mod12 .gz binary files in the 3.2.0 directory. Am I in the wrong directory, or is there something I need to install or do to read kernel? I would like to follow along with the book with what's on my screen, instead of constantly flipping back and forth. I realize a lot of files are completely different from this book published in 2006, and I accept that, but this seems to be a critical juncture of the book and of the operating system as a whole. If it's any consolation, I'm running the OS in VirtualBox on a 64-bit MacBook.

  • Merging ILists to bind on datagridview to avoid using a database view

    - by P.Bjorklund
    In the form we have this, where IntaktsBudgetsType is a poorly named enum that only specifies whether to populate the datagridview by customer or by product (you do the budgeting either by product or by customer):

        private void UpdateGridView()
        {
            bs = new BindingSource();
            bs.DataSource = intaktsbudget.GetDataSource(this.comboBoxKundID.Text, IntaktsBudgetsType.PerKund);
            dataGridViewIntaktPerKund.DataSource = bs;
        }

    This populates the datagridview with a database view that merges the product, budget and customer tables. The logic has the following method to get the correct IList from the repository, which only does GetTable<T>().ToList<T>():

        public IEnumerable<IntaktsBudgetView> GetDataSource(string id, IntaktsBudgetsType type)
        {
            IList<IntaktsBudgetView> list = repository.SelectTable<IntaktsBudgetView>();
            switch (type)
            {
                case IntaktsBudgetsType.PerKund:
                    return from i in list where i.kundId == id select i;
                case IntaktsBudgetsType.PerProdukt:
                    return from i in list where i.produktId == id select i;
            }
            return null;
        }

    Now, I don't want to use a database view, since that is read-only and I want to be able to perform CRUD actions through the datagridview. I could build a class that acts as a wrapper for the whole thing and bind the different table values to class properties, but that doesn't seem quite right, since I would have to do it for every single thing that requires "the merge". Something pretty important (and probably basic) is missing from my thought process, but after spending a weekend on Google and in books, I give up and turn to the SO community.
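
    For what it's worth, the wrapper approach can stay CRUD-friendly if each row object keeps references to the underlying entities instead of copying their values: a grid edit then mutates the real tracked objects, and saving is whatever the repository already does. A hedged sketch — the entity and property names (Kund, Produkt, Budget, Namn, Belopp) are guesses based on the post, not actual types:

        // Row wrapper: joins the customer/product/budget entities in memory and
        // exposes bindable properties that delegate to them, so a DataGridView
        // edit writes straight through to the tracked entities.
        public class BudgetRow
        {
            private readonly Kund kund;
            private readonly Produkt produkt;
            private readonly Budget budget;

            public BudgetRow(Kund kund, Produkt produkt, Budget budget)
            {
                this.kund = kund;
                this.produkt = produkt;
                this.budget = budget;
            }

            public string KundNamn
            {
                get { return kund.Namn; }
                set { kund.Namn = value; } // write-through: updates the entity itself
            }

            public decimal Belopp
            {
                get { return budget.Belopp; }
                set { budget.Belopp = value; }
            }
        }

    Building the rows is then a LINQ join over the three lists instead of a database view:

        var rows = (from b in budgets
                    join k in kunder on b.KundId equals k.Id
                    join p in produkter on b.ProduktId equals p.Id
                    select new BudgetRow(k, p, b)).ToList();

    Since the wrapper never copied anything, persisting the edits is just the repository's normal save call.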

  • SQL to insert latest version of a group of items

    - by Garett
    I'm trying to determine a good way to handle the scenario below. I have the following two database tables, along with sample data. Table1 contains distributions that are grouped per project. A project can have one or more distributions; a distribution can have one or more accounts; an account has a percentage allocated to it. Distributions can be modified by adding or removing accounts, as well as by changing percentages. Table2 tracks distributions, assigning a version number to each distribution. I need to be able to copy new distributions from Table1 to Table2, but only under two conditions:

    1. the entire distribution does not already exist
    2. the distribution has been modified (accounts added/removed or percentages changed)

    Note: when copying a distribution from Table1 to Table2, I need to compare all accounts and percentages within the distribution to determine whether it already exists. When inserting the new distribution, I need to increment the VersionID (MAX(VersionID) + 1). So, in the example provided, the distribution (12345, 1) has been modified: account number 7 was added, and the allocated percentages changed. The entire distribution should be copied to the second table, incrementing the VersionID to 3 in the process. The database in question is SQL Server 2005.

        Table1
        ------
        ProjectID  AccountDistributionID  AccountID  Percent
        12345      1                      1          25.0
        12345      1                      2          25.0
        12345      1                      7          50.0
        56789      2                      3          25.0
        56789      2                      4          25.0
        56789      2                      5          25.0
        56789      2                      6          25.0

        Table2
        ------
        ID  VersionID  ProjectID  AccountDistributionID  AccountID  Percent
        1   1          12345      1                      1          50.0
        2   1          12345      1                      2          50.0
        3   2          56789      2                      3          25.0
        4   2          56789      2                      4          25.0
        5   2          56789      2                      5          25.0
        6   2          56789      2                      6          25.0

  • Context problem while loading Assemblies via Structuremap

    - by Zebi
    I want to load plugins in a similar way as shown here; however, the loaded assemblies do not seem to share the same context. Trying to solve the problem, I built a tiny spike containing two assemblies: one console app and one library. The console app contains the IPlugin interface and has no references to the plugin dll. I am scanning the plugin dir using a custom registration convention:

        ObjectFactory.Initialize(x => x.Scan(s =>
        {
            s.AssembliesFromPath(@"..\Plugin");
            s.With(new PluginScanner());
        }));

        public void Process(Type type, Registry registry)
        {
            if (!type.Name.StartsWith("P")) return;
            var instance = ObjectFactory.GetInstance(type);
            registry.For<IPlugin>().Add((IPlugin)instance);
        }

    This throws an InvalidCastException saying it cannot convert the plugin type to IPlugin:

        public class P1 : IPlugin
        {
            public void Start()
            {
                Console.WriteLine("Hello from P1");
            }
        }

    Furthermore, if I just construct the instance (which works fine, by the way) and access ObjectFactory from within the plugin, ObjectFactory.WhatDoIHave() shows that they don't even share the same container instance. Experimenting with MEF, StructureMap and loading the assembly manually shows that it works fine if loaded with Assembly.Load("Plugin"). Any ideas how I can fix this to work with StructureMap's assembly scanning?
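
    A hedged guess at the symptom: an InvalidCastException between two types that look identical usually means IPlugin got loaded twice — once with the console app and once from a copy of the contracts assembly sitting in the ..\Plugin folder — because AssembliesFromPath brings assemblies in through the load-from context. A small probe to confirm; the plugin path and the namespace-less type name "P1" are assumptions:

        using System;
        using System.Reflection;

        class ContextProbe
        {
            static void Main()
            {
                Type hostSide = typeof(IPlugin);

                Assembly pluginAsm = Assembly.LoadFrom(@"..\Plugin\Plugin.dll");
                Type pluginSide = pluginAsm.GetType("P1").GetInterface("IPlugin");

                // If these print two different file paths, the CLR sees two distinct
                // IPlugin types, and the (IPlugin) cast in the scanner must fail.
                Console.WriteLine(hostSide.Assembly.Location);
                Console.WriteLine(pluginSide.Assembly.Location);
                Console.WriteLine(hostSide == pluginSide);
            }
        }

    If the probe shows two locations, the usual cures are to make sure the interface assembly exists in exactly one place (delete the copy beside the plugins) or to resolve plugins from the app base, which also matches the poster's observation that Assembly.Load("Plugin") works fine.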

  • What is the proper syntax for getting a Makefile to print the output directory of one of its output zip files?

    - by 9exceptionThrower9
    I'm trying to edit an Android Makefile in the hope of getting it to print out the directory (path) of one of the ZIP files it creates. Ideally, since the build process is long and does many things, I would like it to print the path of the ZIP file to a text file in a different directory that I can access later. Pseudocode idea:

        # print the desired pathway to output file
        print(getDirectoryOf(variable-name.zip)) > ~/Desktop/location_of_file.txt

    The Makefile snippet where I would like to insert this new bit of code is shown below. I am interested in finding the directory of $(name).zip (that is the specific file I want to locate):

        # -----------------------------------------------------------------
        # A zip of the directories that map to the target filesystem.
        # This zip can be used to create an OTA package or filesystem image
        # as a post-build step.
        #
        name := $(TARGET_PRODUCT)
        ifeq ($(TARGET_BUILD_TYPE),debug)
          name := $(name)_debug
        endif
        name := $(name)-target_files-$(FILE_NAME_TAG)

        intermediates := $(call intermediates-dir-for,PACKAGING,target_files)
        BUILT_TARGET_FILES_PACKAGE := $(intermediates)/$(name).zip
        $(BUILT_TARGET_FILES_PACKAGE): intermediates := $(intermediates)
        $(BUILT_TARGET_FILES_PACKAGE): \
            zip_root := $(intermediates)/$(name)

        # $(1): Directory to copy
        # $(2): Location to copy it to
        # The "ls -A" is to prevent "acp s/* d" from failing if s is empty.
        define package_files-copy-root
          if [ -d "$(strip $(1))" -a "$$(ls -A $(1))" ]; then \
            mkdir -p $(2) && \
            $(ACP) -rd $(strip $(1))/* $(2); \
          fi
        endef

  • Completed Event not triggering for web service on some systems

    - by Farukh
    Hi, this is a rather weird issue that I am facing with my WCF/Silverlight application. I am using WCF to get data from a database for my Silverlight application, and the completed event is not triggering for one WCF method on some systems. I have checked that the called method executes properly and returns the values. I have also checked via Fiddler, and it clearly shows that the response contains the returned values as well. However, the completed event is not getting triggered. Moreover, on a few of the systems everything is fine, and I am able to process the returned value in the completed method. Any thoughts or suggestions would be greatly appreciated. I have tried searching around the web, but without any luck :( Following is the code. Calling the method:

        void RFCDeploy_Loaded(object sender, RoutedEventArgs e)
        {
            btnSelectFile.IsEnabled = true;
            btnUploadFile.IsEnabled = false;
            btnSelectFile.Click += new RoutedEventHandler(btnSelectFile_Click);
            btnUploadFile.Click += new RoutedEventHandler(btnUploadFile_Click);
            RFCChangeDataGrid.KeyDown += new KeyEventHandler(RFCChangeDataGrid_KeyDown);
            btnAddRFCManually.Click += new RoutedEventHandler(btnAddRFCManually_Click);

            ServiceReference1.DataService1Client ws = new BEVDashBoard.ServiceReference1.DataService1Client();
            ws.GetRFCChangeCompleted += new EventHandler<BEVDashBoard.ServiceReference1.GetRFCChangeCompletedEventArgs>(ws_GetRFCChangeCompleted);
            ws.GetRFCChangeAsync();
            this.BusyIndicator1.IsBusy = true;
        }

    The completed event:

        void ws_GetRFCChangeCompleted(object sender, BEVDashBoard.ServiceReference1.GetRFCChangeCompletedEventArgs e)
        {
            PagedCollectionView view = new PagedCollectionView(e.Result);
            view.GroupDescriptions.Add(new PropertyGroupDescription("RFC"));
            RFCChangeDataGrid.ItemsSource = view;
            foreach (CollectionViewGroup group in view.Groups)
            {
                RFCChangeDataGrid.CollapseRowGroup(group, true);
            }
            this.BusyIndicator1.IsBusy = false;
        }

    Please note that this WCF service has lots of other methods, and all of them work fine; I have a problem with only this method. Thanks.
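
    One hedged diagnostic that fits "works on some machines only": the generated *CompletedEventArgs derive from AsyncCompletedEventArgs, so when the call faults (cross-domain policy, message size limits, timeouts — all things that vary per machine), the completed event can still fire, but the first touch of e.Result rethrows the stored exception and the handler dies before doing anything visible. Checking e.Error before e.Result usually surfaces the real failure; a sketch against the proxy types from the post:

        void ws_GetRFCChangeCompleted(object sender,
            BEVDashBoard.ServiceReference1.GetRFCChangeCompletedEventArgs e)
        {
            this.BusyIndicator1.IsBusy = false;

            if (e.Error != null)
            {
                // Surface the stored fault instead of letting e.Result rethrow it.
                MessageBox.Show(e.Error.ToString());
                return;
            }

            PagedCollectionView view = new PagedCollectionView(e.Result);
            view.GroupDescriptions.Add(new PropertyGroupDescription("RFC"));
            RFCChangeDataGrid.ItemsSource = view;
        }

    If e.Error reports a CommunicationException only on the failing machines, maxReceivedMessageSize in the client config is a common first suspect, since this one method presumably returns a larger payload than the others.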

  • SQL Query to separate data into two fields

    - by Phillip
    I have data in one column that I want to separate into two columns. The data is separated by a comma when present. The field can hold no data, one set of data, or two sets of data separated by the comma. Currently I pull the data and save it as a comma-delimited file, use FoxPro to load the data into a table, process the data as needed, and then re-insert the data into a different SQL table for my use. I would like to drop the FoxPro portion and have the SQL query separate the data for me. Below is a sample of what the data looks like:

        Store  Amount  Discount
        1      5.95
        1      5.95    PO^-479^2
        1      5.95    PO^-479^2
        2      5.95
        2      5.95    PO^-479^2
        2      5.95    +CA8A09^-240^4,CORDRC^-239^7
        3      5.95
        3      5.95    +CA8A09^-240^4,CORDRC^-239^7
        3      5.95    +CA8A09^-240^4,CORDRC^-239^7

    In the data above, I want to sum the Amount field to get a gross amount, then pull out the specific discount amount (located between the caret characters) and sum it to get the total discount amount, then add the two together to get the total net amount. The query I want to write will separate the Discount field as needed (see store 2, line 3, where two discounts are applied) and then pull out the value between the caret characters.

  • MySQL query paralyzes site

    - by nute
    Once in a while, at random intervals, our website gets completely paralyzed. Looking at SHOW FULL PROCESSLIST, I've noticed that when this happens, there is a specific query that sits in "Copying to tmp table" for a looong time (sometimes 350 seconds), while almost all the other queries are "Locked". The part I don't understand is that 90% of the time this query runs fine: I see it go through the process list, and it finishes pretty quickly. The query is issued by an Ajax call on our homepage to display product recommendations based on your browsing history (a la Amazon). Just sometimes, randomly (but too often), it gets stuck at "Copying to tmp table". Here is a caught instance of the query that had been running for 109 seconds when I looked:

        SELECT DISTINCT product_product.id, product_product.name,
               product_product.retailprice, product_product.imageurl,
               product_product.thumbnailurl, product_product.msrp
        FROM product_product, product_xref, product_viewhistory
        WHERE (
                (product_viewhistory.productId = product_xref.product_id_1
                 AND product_xref.product_id_2 = product_product.id)
             OR (product_viewhistory.productId = product_xref.product_id_2
                 AND product_xref.product_id_1 = product_product.id)
              )
          AND product_product.outofstock = 'N'
          AND product_viewhistory.cookieId = '188af1efad392c2adf82'
          AND product_viewhistory.productId IN (24976, 25873, 26067, 26073, 44949, 16209, 70528, 69784, 75171, 75172)
        ORDER BY product_xref.hits DESC
        LIMIT 10

    Of course the cookieId and the list of productIds change dynamically with each request. I use PHP with PDO.
