Search Results

Search found 22000 results on 880 pages for 'worker process'.


  • Can FAXCOMEXLib and Windows Fax Service send a color fax?

    - by Craig
    We are in the process of testing different options for sending faxes from within our C# code (receiving faxes is not necessary). One of those options is to use FAXCOMEXLib. Unsurprisingly, I've had pretty good success sending out black & white faxes with FAXCOMEXLib. But we also have a requirement to support sending color faxes. So I execute the following code (just a snippet):

        IFaxDocument oFaxDoc = new FaxDocumentClass();
        oFaxDoc.Body = @"C:\Test\color_image.jpg";
        oFaxDoc.ConnectedSubmit(m_oFaxServer);

    The image is 24-bit color, 1728x2304, 204x196 dpi. For the most part, this process works (with a couple of small quirks) and the fax shows up in my "Windows Fax and Scan" outbox (I'm on Vista). The problem is that the image has been dithered to a 1-bit black & white image. I assume that what I see in "Windows Fax and Scan" is what is actually transmitted. So is there a way to send a color fax using this technology? Are we missing a configuration option somewhere to make it work?


  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a mono-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do. I have a list of tests, with the following attributes:

    uri: a URI to test (could be HTTP/HTTPS/SSH/local);
    depends: an associative array of tests/values that this test depends on;
    join: a list of DB joins to be added when selecting items to process in this test;
    depends_db: additional conditions to add to the DB request when selecting items to process in this test.

    The program builds a dependency tree, beginning with the tests that have no dependencies. For each test: a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db); the list of items is sent to the URI (using POST or stdin); the result is retrieved as a YAML file listing the state and comments for the test for each tested item; the results are stored in the DB; the test returns, allowing depending tests to be performed. Finally, the program generates reports (CSV, DB, graphviz) of the performed tests.

    The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

    backup: hosted on the backup machine(s), called through HTTP, checks if the machines' backup went well;
    DNS: hosted on the local machine, called via stdin, checks if the machines' fqdn have a valid DNS entry.

    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?
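    Whichever language wins out, the scheduling core is a topological walk of the dependency tree: tests with no unmet dependencies run first, and each completed test releases the tests waiting on it. A minimal sketch of that idea (C++ here purely for illustration; the test names and structure are hypothetical):

        // Kahn-style topological run of a test dependency tree (sample data below).
        #include <iostream>
        #include <map>
        #include <queue>
        #include <set>
        #include <string>
        #include <vector>

        int main() {
            // test -> set of tests it depends on (hypothetical sample data)
            std::map<std::string, std::set<std::string>> depends = {
                {"backup", {}},
                {"dns", {}},
                {"report", {"backup", "dns"}},
            };

            // Reverse edges: test -> tests waiting on it; roots go straight to 'ready'.
            std::map<std::string, std::vector<std::string>> waiters;
            std::queue<std::string> ready;
            for (const auto& entry : depends) {
                if (entry.second.empty()) ready.push(entry.first);
                for (const auto& dep : entry.second) waiters[dep].push_back(entry.first);
            }

            while (!ready.empty()) {
                std::string test = ready.front();
                ready.pop();
                std::cout << "running " << test << "\n";  // select items, POST, store results...
                for (const auto& w : waiters[test]) {
                    depends[w].erase(test);               // this dependency is now satisfied
                    if (depends[w].empty()) ready.push(w);
                }
            }
            return 0;
        }

    Tests sitting in the ready queue at the same moment have no ordering constraints between them, which is exactly where the parallelism can go.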


  • Does using ReadDirectoryChangesW require administrator rights?

    - by Alex Jenter
    MSDN says that using ReadDirectoryChangesW requires the calling process to have the Backup and Restore privileges. Does this mean that only a process launched under an administrator account will work correctly? I've tried the following code; it fails to enable the required privileges when running as a restricted user.

        void enablePrivileges() {
            enablePrivilege(SE_BACKUP_NAME);
            enablePrivilege(SE_RESTORE_NAME);
        }

        void enablePrivilege(LPCTSTR name) {
            HANDLE hToken;
            DWORD status;
            if (::OpenProcessToken(::GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &hToken)) {
                TOKEN_PRIVILEGES tp = { 1 };  // PrivilegeCount = 1
                if (::LookupPrivilegeValue(NULL, name, &tp.Privileges[0].Luid)) {
                    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
                    // Note: AdjustTokenPrivileges can succeed even when nothing
                    // was granted; GetLastError() == ERROR_NOT_ALL_ASSIGNED is
                    // the signal that the account lacks the privilege.
                    BOOL result = ::AdjustTokenPrivileges(hToken, FALSE, &tp, 0, NULL, NULL);
                    verify(result != FALSE);
                    status = ::GetLastError();
                }
                ::CloseHandle(hToken);
            }
        }

    Am I doing something wrong? Is there any workaround for using ReadDirectoryChangesW from a non-administrator user account? It seems that .NET's FileSystemWatcher can do this. Thanks!
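    For reference, FileSystemWatcher manages without those privileges because it opens the directory for FILE_LIST_DIRECTORY access; the Backup/Restore privileges matter when backup semantics are used to bypass security checks, not for plain change notification on a directory the account can already read. A minimal sketch of the same approach in plain Win32 (error handling trimmed; the watched path is just an example):

        // Watching a directory as a restricted user: FILE_LIST_DIRECTORY access,
        // no SeBackupPrivilege required.
        #include <windows.h>
        #include <stdio.h>

        int main() {
            // FILE_FLAG_BACKUP_SEMANTICS is merely the documented requirement
            // for opening a directory handle; it does not by itself demand the
            // backup privilege.
            HANDLE hDir = ::CreateFile(TEXT("C:\\Test"), FILE_LIST_DIRECTORY,
                FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
            if (hDir == INVALID_HANDLE_VALUE) return 1;

            BYTE buffer[4096];
            DWORD bytes = 0;
            while (::ReadDirectoryChangesW(hDir, buffer, sizeof(buffer), TRUE,
                       FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
                       &bytes, NULL, NULL)) {
                // Only the first record per batch is shown; real code walks
                // NextEntryOffset. FileName is not null-terminated.
                const FILE_NOTIFY_INFORMATION* fni =
                    reinterpret_cast<const FILE_NOTIFY_INFORMATION*>(buffer);
                wprintf(L"change: %.*s\n",
                        (int)(fni->FileNameLength / sizeof(WCHAR)), fni->FileName);
            }
            ::CloseHandle(hDir);
            return 0;
        }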


  • Handling XMLHttpRequest to call external application

    - by Ian
    I need a simple way to use XMLHttpRequest as a way for a web client to access applications in an embedded device. I'm getting confused trying to figure out how to make something thin and light that handles the XMLHttpRequests coming to the web server and can translate those to application calls.

    The situation: The web client, using Ajax (ExtJS specifically), needs to send and receive asynchronously to an existing embedded application. This isn't just to have a thick client/thin server; the client needs to run background checking on the application status. The application can expose a socket interface, with a known set of commands, events, and configuration values. Configuration could probably be transmitted as XML since it comes from a SQLite database. In between the client and the app is a lighttpd web server running something that somehow handles the translation. This something is the problem.

    What I think I want: lighttpd can use FastCGI to route all XMLHttpRequests to an external process. This process will understand HTML/XML and translate between that and the application's language. It will have custom logic to simulate pushing notifications to the client (receive an XMLHttpRequest, don't respond until the next notification is available). C/C++. I'd really like to avoid installing Java/PHP/Perl on an embedded device, so I'll need more low-level understanding.

    How do I do this? Are there good C++ libraries for interpreting the CGI headers and HTML so that I don't have to do any syntax processing and can just deal with the request/response contents? Are there any good references to exactly what goes on, server side, when handling the XMLHttpRequest and CGI interfaces? Is there any package that does most of this job already, or will I have to build the non-HTTP/CGI stuff from scratch? Thanks for any help! I am really having trouble learning about the server side of these technologies.
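    On the "interpreting CGI headers" question, the FastCGI C API is small enough to use directly: libfcgi's fcgi_stdio wrapper turns each request into an ordinary stdio loop, and the CGI variables arrive through getenv. A minimal responder sketch (assuming lighttpd's mod_fastcgi launches the binary; the application-socket call is left as a stub):

        // Minimal FastCGI responder built on libfcgi's stdio wrapper.
        // Build (assuming libfcgi is installed): g++ responder.cpp -lfcgi -o responder
        #include <fcgi_stdio.h>  // must come before anything else using stdio
        #include <stdlib.h>

        int main(void) {
            while (FCGI_Accept() >= 0) {            // one loop iteration per request
                const char* len = getenv("CONTENT_LENGTH");
                long n = len ? atol(len) : 0;
                static char body[65536];
                if (n > 0 && n < (long)sizeof(body)) {
                    fread(body, 1, (size_t)n, stdin);   // the XHR POST payload
                    body[n] = '\0';
                } else {
                    body[0] = '\0';
                }
                // ...translate `body` into a command on the application socket,
                // and hold off replying to simulate a pushed notification...
                printf("Content-Type: text/xml\r\n\r\n");
                printf("<status>ok</status>");
            }
            return 0;
        }

    Long-polling falls out naturally with this structure: simply don't print the response until the application socket delivers the next event.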


  • IIS Digest repeatedly asking for authentication

    - by David Budiac
    I have a development copy of an ASP.NET intranet site checked out and running on my local machine. We're using Digest authentication to allow users to log in with their Active Directory accounts. On my development copy only, Digest will sometimes repeatedly prompt for login information, usually ~9 times per page request. After repeatedly logging in (it also works to cancel out of 8 of the 9 prompts), I can use the site as normal. I cannot pinpoint what triggers the issue: sometimes it happens on the next page request, sometimes after I edit/save/refresh a page, and sometimes it doesn't happen at all.

    Each prompt triggers several logon security events (Event ID 4624 & 4672) in the Event Viewer. Shortly after each burst of logon events, I'll see a burst of logoff events. A co-worker who has a nearly identical setup (Windows 7, IIS 7) is not experiencing the issue. Our production copy (running on a different server) also does not experience the issue. We've tried comparing our settings in IIS, not really finding any differences. I'm using Chrome, but I've experienced the issue in other browsers.


  • "Scheduling restart of crashed service", but no call to onStart() follows

    - by kostmo
    In the 1.6 API, is there a way to ensure that the onStart() method of a Service is called after the service is killed due to memory pressure? From the logs, it seems that the process that the service belongs to is restarted, but the service itself is not. I have placed a Log.d() call in the onStart() method, and it is not reached. To test my service under memory pressure, I spawn it from an activity, then launch the web browser and visit some JavaScript-heavy websites like Slashdot until my service is killed. The logcat reads:

        03-07 16:44:13.778: INFO/ActivityManager(52): Process com.kostmo.charbuilder.full (pid 2909) has died.
        03-07 16:44:13.778: WARN/ActivityManager(52): Scheduling restart of crashed service com.kostmo.charbuilder.full/com.kostmo.charbuilder.DownloadImagesService in 5000ms
        03-07 16:44:13.778: INFO/ActivityManager(52): Low Memory: No more background processes.
        03-07 16:44:13.778: ERROR/ActivityThread(52): Failed to find provider info for android.server.checkin
        03-07 16:44:13.778: WARN/Checkin(52): Can't log event SYSTEM_SERVICE_LOOPING: java.lang.IllegalArgumentException: Unknown URL content://android.server.checkin/events
        03-07 16:44:18.908: INFO/ActivityManager(52): Start proc com.kostmo.charbuilder.full for service com.kostmo.charbuilder.full/com.kostmo.charbuilder.DownloadImagesService: pid=3560 uid=10027 gids={3003, 1015}
        03-07 16:44:19.868: DEBUG/ddm-heap(3560): Got feature list request
        03-07 16:44:20.128: INFO/ActivityThread(3560): Publishing provider com.kostmo.charbuilder.full.provider.character: com.kostmo.charbuilder.provider.ImageFileContentProvider


  • How to use threading with ConcurrentQueue<T>?

    - by dboarman
    I am trying to figure out the best way of working with a queue. I have a process that returns a DataTable. Each DataTable, in turn, is merged with the previous DataTable. There is one problem: too many records to hold until the final BulkCopy (OutOfMemory). So, I have determined that I should process each incoming DataTable immediately. I'm thinking about the ConcurrentQueue<T>... but I don't see how the WriteQueuedData() method would know to dequeue a table and write it to the database. For instance:

        public class TableTransporter
        {
            private ConcurrentQueue<DataTable> tableQueue = new ConcurrentQueue<DataTable>();

            public TableTransporter()
            {
                tableQueue.OnItemQueued += new EventHandler(WriteQueuedData);  // no events available
            }

            public void ExtractData()
            {
                DataTable table;
                // perform data extraction
                tableQueue.Enqueue(table);
            }

            private void WriteQueuedData(object sender, EventArgs e)
            {
                BulkCopy(e.Table);
            }
        }

    My first question is: aside from the fact that I don't actually have any events to subscribe to, if I call ExtractData() asynchronously, will this be all that I need? Second, is there something I'm missing about the way ConcurrentQueue<T> functions, needing some form of trigger to work asynchronously with the queued objects?
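    The missing trigger is usually supplied by a dedicated consumer thread that blocks on the queue rather than by an event (in .NET 4 that is what BlockingCollection<T> layers on top of ConcurrentQueue<T>). The pattern itself is language-neutral; a minimal sketch of the blocking-consumer idea, written here in C++ with a condition variable:

        // A blocking queue: the consumer thread sleeps until a producer enqueues,
        // which replaces the missing OnItemQueued event.
        #include <condition_variable>
        #include <iostream>
        #include <mutex>
        #include <queue>
        #include <thread>

        template <typename T>
        class BlockingQueue {
            std::queue<T> q_;
            std::mutex m_;
            std::condition_variable cv_;
        public:
            void enqueue(T item) {
                {
                    std::lock_guard<std::mutex> lock(m_);
                    q_.push(std::move(item));
                }
                cv_.notify_one();                   // wake the writer thread
            }
            T dequeue() {                           // blocks until an item exists
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return !q_.empty(); });
                T item = std::move(q_.front());
                q_.pop();
                return item;
            }
        };

        int main() {
            BlockingQueue<int> tables;              // stand-in for DataTable batches
            std::thread writer([&] {
                for (int i = 0; i < 3; ++i)         // "BulkCopy" each batch on arrival
                    std::cout << "writing batch " << tables.dequeue() << "\n";
            });
            for (int i = 0; i < 3; ++i) tables.enqueue(i);   // the extraction side
            writer.join();
            return 0;
        }

    The writer thread sleeps until a producer enqueues, so each batch is written as soon as it arrives instead of piling up in memory.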


  • rewrite not a member of LiftRules

    - by José Leal
    Hi guys, I was following the http://www.assembla.com/wiki/show/liftweb/URL_Rewriting tutorial for URL rewriting in Lift, but I get this error:

        error: value rewrite is not a member of object net.liftweb.http.LiftRules

    This is really odd, as the documentation says that it exists. I'm using the IDEA IDE, and I've done everything from scratch, using the Lift Maven blank archetype. Some more info:

        [INFO] ------------------------------------------------------------------------
        [INFO] Building Joseph3
        [INFO]    task-segment: [tomcat:run]
        [INFO] ------------------------------------------------------------------------
        [INFO] Preparing tomcat:run
        [INFO] [resources:resources {execution: default-resources}]
        [WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 0 resource
        [INFO] [yuicompressor:compress {execution: default}]
        [INFO] nb warnings: 0, nb errors: 0
        [INFO] artifact org.mortbay.jetty:jetty: checking for updates from scala-tools.org
        [INFO] artifact org.mortbay.jetty:jetty: checking for updates from central
        [INFO] [compiler:compile {execution: default-compile}]
        [INFO] Nothing to compile - all classes are up to date
        [INFO] [scala:compile {execution: default}]
        [INFO] Checking for multiple versions of scala
        [INFO] /home/dpz/Scala/Doit/Joseph3/src/main/scala:-1: info: compiling
        [INFO] Compiling 2 source files to /home/dpz/Scala/Doit/Joseph3/target/classes at 1274922123910
        [ERROR] /home/dpz/Scala/Doit/Joseph3/src/main/scala/bootstrap/liftweb/Boot.scala:16: error: value rewrite is not a member of object net.liftweb.http.LiftRules
        [INFO]     LiftRules.rewrite.prepend(NamedPF("ProductExampleRewrite") {
        [INFO]               ^
        [ERROR] one error found
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1)
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 19 seconds
        [INFO] Finished at: Thu May 27 03:02:07 CEST 2010
        [INFO] Final Memory: 20M/175M
        [INFO] ------------------------------------------------------------------------

        Process finished with exit code 1


  • Celery daemon as an Ubuntu service does not consume tasks, while running from the terminal does

    - by Guy
    On Ubuntu 11.10, I have to issue Python tasks from Django using Celery. I'm currently testing on the same machine, but eventually the Celery worker should run on a remote machine. Django uses the following settings:

        BROKER_HOST = "127.0.0.1"
        BROKER_PORT = 5672
        BROKER_VHOST = "/my_vhost"
        BROKER_USER = "celery"
        BROKER_PASSWORD = "celery"

    I can also see my task queued in http://localhost:55672/#/queues. The celery daemon uses the following configuration (celeryconfig.py):

        BROKER_HOST = "127.0.0.1"
        BROKER_PORT = 5672
        BROKER_USER = "celery"
        BROKER_PASSWORD = "celery"
        BROKER_VHOST = "/my_vhost"
        CELERY_RESULT_BACKEND = "amqp"

        import os
        import sys
        sys.path.append(os.getcwd())

        CELERY_IMPORTS = ("tasks", )

    Running celeryd -l info works well, and now I want to run it as a service. I've followed the instructions from http://ask.github.com/celery/cookbook/daemonizing.html and now I'm trying to run it using:

        sudo /etc/init.d/celeryd start

    But the message is not being consumed, and there is no error in the celery log either. /etc/default/celeryd:

        CELERYD_NODES="w1"
        CELERYD_CHDIR="/path/to/django/project"
        CELERYD_OPTS="--time-limit=300 --concurrency=1"
        CELERY_CONFIG_MODULE="celeryconfig"

        # %n will be replaced with the nodename.
        CELERYD_LOG_FILE="/var/log/celery/%n.log"
        CELERYD_PID_FILE="/var/run/celery/%n.pid"

        # Workers should run as an unprivileged user.
        CELERYD_USER="celery"
        CELERYD_GROUP="celery"

    I've also created the user celery in Ubuntu; not sure if it's necessary. Any help will be appreciated. Thanks, Guy


  • Can a standalone Ruby script (Windows and Mac) reload and restart itself?

    - by user30997
    I have a master-workers architecture where the number of workers is growing on a weekly basis. I can no longer be expected to ssh or remote-console into each machine to kill the worker, do a source control sync, and restart. I would like to be able to have the master place a message out on the network that tells each machine to sync and restart. That's where I hit a roadblock. If I were using any sane platform, I could just do:

        exec('ruby', __FILE__)

    ...and be done. However, I did the following test:

        p Process.pid
        sleep 1
        exec('ruby', __FILE__)

    ...and on Windows, I get one Ruby instance for each call to exec. None of them die until I hit ^C on the window in question. On every platform I tried this on, it is executing the new version of the file each time, which I have verified by making simple edits to the test script while the test marched along. The reason I'm printing the pid is to double-check the behavior I'm seeing. On Windows, I am getting a different pid with each execution - which I would expect, considering that I am seeing a new process in the task manager for each run. The Mac is behaving correctly: the pid is the same for every system call, and I have verified with dtrace that each run is triggering a call to the execve syscall.

    So, in short, is there a way to get a Windows Ruby script to restart its execution so it will be running any code - including itself - that has changed during its execution? Please note that this is not a Rails application, though it does use ActiveRecord.


  • Temporary storage for keeping data between program iterations?

    - by mr.b
    I am working on an application that works like this:

    1. It fetches data from many sources, resulting in a pool of about 500,000-1,500,000 records (depends on time/day).
    2. Data is parsed.
    3. Part of the data is processed in a way to compare it to pre-existing data (read from the database), calculations are made, and the results are stored in the database. The resulting dataset that has to be stored in the database is, however, much smaller in size (compared to the original data set), and ranges from 5,000-50,000 records. This process almost always updates existing data, perhaps adding a few more records.
    4. Then, the data from step 2 should be kept somehow, somewhere, so that the next time data is fetched, there is a data set which can be used to perform calculations without touching the pre-existing data in the database.

    I should point out that this data can be lost; it's not irreplaceable (key information can be read from the database if needed), but it would speed up the process next time. Application components can (and will be) run off different computers (in the same network), so storage has to be reachable from multiple hosts.

    I have considered using memcached, but I'm not quite sure I should, because one record is usually no smaller than 200 bytes, and if I have 1,500,000 records, I guess it would amount to over 300 MB of memcached cache... But that doesn't seem scalable to me - what if the data were 5x that amount? What if it were to consume 1-2 GB of cache only to keep data between iterations (which could easily happen)?

    So, the question is: which temporary storage mechanism would be most suitable for this kind of processing? I haven't considered using MySQL temporary tables, as I'm not sure if they can persist between sessions and be used by other hosts in the network... Any other suggestions? Something I should consider?


  • Simulating O_NOFOLLOW (2): Is this other approach safe?

    - by Daniel Trebbien
    As a follow-up question to this one, I thought of another approach which builds off of @caf's answer for the case where I want to append to file name and create it if it does not exist. Here is what I came up with:

    1. Create a temporary directory with mode 0700 in a system temporary directory on the same filesystem as file name.
    2. Create an empty, temporary, regular file (temp_name) in the temporary directory (only serves as a placeholder).
    3. Open file name for reading only, just to create it if it does not exist. The OS may follow name if it is a symbolic link; I don't care at this point.
    4. Make a hard link to name at temp_name (overwriting the placeholder file). If the link call fails, then exit. (Maybe someone has come along and removed the file at name, who knows?)
    5. Use lstat on temp_name (now a hard link). If S_ISLNK(lst.st_mode), then exit.
    6. open temp_name for writing, append (O_WRONLY | O_APPEND).
    7. Write everything out. Close the file descriptor.
    8. unlink the hard link.
    9. Remove the temporary directory.

    (All of this, by the way, is for an open source project that I am working on. You can view the source of my implementation of this approach here.)

    Is this procedure safe against symbolic link attacks? For example, is it possible for a malicious process to ensure that the inode for name represents a regular file for the duration of the lstat check, then make the inode a symbolic link with the temp_name hard link now pointing to the new symbolic link? I am assuming that a malicious process cannot affect temp_name.
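    For concreteness, a condensed sketch of steps 4-8 in POSIX C (error reporting and the temporary-directory setup from steps 1-3 elided; the function name is made up):

        // temp_name must live on the same filesystem as name.
        #include <fcntl.h>
        #include <string.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int append_via_link(const char* name, const char* temp_name, const char* data) {
            if (link(name, temp_name) != 0)         // step 4: fails if name vanished
                return -1;
            struct stat lst;
            if (lstat(temp_name, &lst) != 0 || S_ISLNK(lst.st_mode)) {
                unlink(temp_name);                  // step 5: refuse symlink inodes
                return -1;
            }
            int fd = open(temp_name, O_WRONLY | O_APPEND);   // step 6
            if (fd < 0) {
                unlink(temp_name);
                return -1;
            }
            ssize_t written = write(fd, data, strlen(data)); // step 7
            close(fd);
            unlink(temp_name);                      // step 8: drop the hard link
            return written < 0 ? -1 : 0;
        }

    Worth noting: on Linux, link() does not follow a symbolic link passed as its source path, so the new hard link can share the symlink's inode - which is exactly the case the lstat in step 5 screens out.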


  • Qt/MFC Migration Framework tool: properly exiting DLL?

    - by User
    I'm using the Qt/MFC Migration Framework tool, following this example: http://doc.qt.nokia.com/solutions/4/qtwinmigrate/winmigrate-qt-dll-example.html

    The DLL I build is loaded by a third-party MFC-based application. The third-party app basically calls one of my exported DLL functions to start up my plugin and another function to shut it down. Currently I'm doing nothing in my shutdown function. When I load my DLL in the third-party app, the startup function is called, my DLL starts successfully, and I can see my message box. However, if I shut down my plugin and then try to start it again, I get the following error:

        Debug Error!
        Program: <my 3rd party app>
        Module: 4.7.1
        File: global\qglobal.cpp
        Line: 2262
        ASSERT failure in QWidget: "Widgets must be created in the GUI thread.", file kernel\qwidget.cpp line 1233
        (Press Retry to debug the application)
        Abort Retry Ignore

    This makes me think I'm not doing something to properly shut down my plugin. What do I need to do to shut it down properly?

    UPDATE: http://doc.qt.nokia.com/solutions/4/qtwinmigrate/winmigrate-walkthrough.html says:

    The DLL also has to make sure that it can be loaded together with other Qt based DLLs in the same process (in which case a QApplication object will probably exist already), and that the DLL that creates the QApplication object remains loaded in memory to avoid other DLLs using memory that is no longer available to the process.

    So I wonder if there is some problem where I need to somehow keep the original DLL loaded no matter what?
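    For comparison, the walkthrough's own DLL example keeps the QApplication alive for the whole process lifetime by creating it on DLL_PROCESS_ATTACH and deleting it only on DLL_PROCESS_DETACH - roughly the following shape (a sketch from memory of the QtWinMigrate docs, not verified against this exact setup):

        #include <qmfcapp.h>
        #include <windows.h>

        static bool ownApplication = false;

        BOOL WINAPI DllMain(HINSTANCE hInstance, DWORD dwReason, LPVOID /*reserved*/)
        {
            if (dwReason == DLL_PROCESS_ATTACH)
                // Creates a QApplication for this process if none exists yet and
                // hooks it into the MFC message loop; true means this DLL owns it.
                ownApplication = QMfcApp::pluginInstance(hInstance);
            if (dwReason == DLL_PROCESS_DETACH && ownApplication)
                delete qApp;    // only the DLL that created it tears it down
            return TRUE;
        }

    With that shape, a plugin start/stop cycle never destroys and recreates the QApplication, which sidesteps the "Widgets must be created in the GUI thread" assert on the second start.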


  • Performance question: Inverting an array of pointers in-place vs array of values

    - by Anders
    The background for asking this question is that I am solving a linearized equation system (Ax=b), where A is a matrix (typically of dimension less than 100x100) and x and b are vectors. I am using a direct method, meaning that I first invert A, then find the solution by x = A^(-1) b. This step is repeated in an iterative process until convergence.

    The way I'm doing it now, using a matrix library (MTL4): for every iteration I copy all coefficients of A (values) into the matrix object, then invert. This is the easiest and safest option.

    Using an array of pointers instead: for my particular case, the coefficients of A happen to be updated between each iteration. These coefficients are stored in different variables (some are arrays, some are not). Would there be a potential for performance gain if I set up A as an array containing pointers to these coefficient variables, then inverted A in-place? The nice thing about the last option is that once I have set up the pointers in A before the first iteration, I would not need to copy any values between successive iterations. The values which are pointed to in A would automatically be updated between iterations.

    So the performance question boils down to this, as I see it:

    - The matrix inversion process takes roughly the same amount of time, assuming dereferencing of pointers is non-expensive.
    - The array of pointers does not need the extra memory for a matrix A containing values.
    - The array-of-pointers option does not have to copy all NxN values of A between each iteration.
    - The values pointed to in the array-of-pointers option are generally NOT ordered in memory. Hopefully, all values lie relatively close in memory, but *A[0][1] is generally not next to *A[0][0], etc.

    Any comments on this? Will the last remark affect performance negatively, thus outweighing the positive performance effects?
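    Since the answer is workload-dependent, a quick micro-benchmark of the two access patterns can settle it. A minimal harness sketch (a row-sum stands in for the inversion's inner loops; note these pointers land contiguously, so genuinely scattered coefficients would fare worse):

        // Micro-benchmark: traversing N*N doubles through a contiguous copy vs.
        // through an array of pointers.
        #include <chrono>
        #include <iostream>
        #include <vector>

        int main() {
            const int N = 100;
            std::vector<double> coeffs(N * N, 1.0);   // the coefficient variables

            std::vector<double> values(coeffs);       // option 1: contiguous copy
            std::vector<double*> ptrs(N * N);         // option 2: pointers into coeffs
            for (int i = 0; i < N * N; ++i) ptrs[i] = &coeffs[i];

            auto bench = [](const char* label, auto&& body) {
                auto t0 = std::chrono::steady_clock::now();
                double sum = 0;
                for (int rep = 0; rep < 10000; ++rep) sum += body();
                auto t1 = std::chrono::steady_clock::now();
                std::cout << label << ": "
                          << std::chrono::duration<double>(t1 - t0).count()
                          << " s (checksum " << sum << ")\n";  // checksum defeats dead-code elimination
            };
            bench("values  ", [&] { double s = 0; for (double v : values) s += v; return s; });
            bench("pointers", [&] { double s = 0; for (double* p : ptrs) s += *p; return s; });
            return 0;
        }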


  • Signals and Variables in VHDL (order) - Problem

    - by Morano88
    I have a signal, and this signal is a bit vector (Z). The length of the bit vector depends on an input n; it is not fixed. In order to find the length, I have to do some computations. Can I define a signal after defining the variables? It gives me errors when I do that. It works fine if I keep the signal before the variables (which is what is shown below), but I don't want that: the length of Z depends on the computations of the variables. What is the solution?

        library IEEE;
        use IEEE.STD_LOGIC_1164.ALL;
        use IEEE.STD_LOGIC_ARITH.ALL;
        use IEEE.STD_LOGIC_UNSIGNED.ALL;

        entity BSD_Full_Comp is
            Generic (n : integer := 8);
            Port (X, Y : inout std_logic_vector(n-1 downto 0);
                  FZ : out std_logic_vector(1 downto 0));
        end BSD_Full_Comp;

        architecture struct of BSD_Full_Comp is

            Component BSD_BitComparator
                Port ( Ai_1 : inout STD_LOGIC;
                       Ai_0 : inout STD_LOGIC;
                       Bi_1 : inout STD_LOGIC;
                       Bi_0 : inout STD_LOGIC;
                       S1 : out STD_LOGIC;
                       S0 : out STD_LOGIC );
            END Component;

            Signal Z : std_logic_vector(2*n-3 downto 0);

        begin
            ass : process
                Variable length : integer := n;
                Variable pow : integer := 0;
                Variable ZS : integer := 0;
            begin
                while length /= 0 loop
                    length := length / 2;
                    pow := pow + 1;
                end loop;
                length := 2 ** pow;
                ZS := length - n;
                wait;
            end process;
        end struct;


  • bind9 named.conf zones size limit

    - by mox601
    I am trying to set up a test environment on my local machine, and I am trying to start a DNS daemon that loads the configuration from a named.conf.custom file. As long as the size of that file is like 3-4 zones, the bind9 daemon loads fine, but when I enter the config file I need (about 10,000 lines long), bind can't start up, and in the syslog I find this message:

        starting BIND 9.7.0-P1 -u bind
        Jun 14 17:06:06 cibionte-pc named[9785]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        Jun 14 17:06:06 cibionte-pc named[9785]: adjusted limit on open files from 1024 to 1048576
        Jun 14 17:06:06 cibionte-pc named[9785]: found 1 CPU, using 1 worker thread
        Jun 14 17:06:06 cibionte-pc named[9785]: using up to 4096 sockets
        Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration from '/etc/bind/named.conf'
        Jun 14 17:06:06 cibionte-pc named[9785]: /etc/bind/named.conf.saferinternet:1: unknown option 'zone'
        Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration: failure
        Jun 14 17:06:06 cibionte-pc named[9785]: exiting (due to fatal error)

    Are there any limits on the file size bind9 is allowed to load?


  • Unable to run .exe application using C# code

    - by bjh Hans
    I have an exe that I need to call from my C# program with two arguments (PracticeId, ClaimId). For example, suppose I have an application test.exe whose functionality is to make a claim according to the two given arguments. On cmd I would normally give the following command:

        test.exe 1 2

    and it works fine and performs its job of conversion. But what if I want to execute the same thing using my C# code? I am using the following sample code:

        Process compiler = new Process();
        compiler.StartInfo.FileName = "test.exe";
        compiler.StartInfo.Arguments = "1 2";
        // Note: redirecting standard output only works when UseShellExecute
        // is false; setting it to true here makes Start() throw.
        compiler.StartInfo.UseShellExecute = false;
        compiler.StartInfo.RedirectStandardOutput = true;
        compiler.Start();

    When I try to invoke test.exe using the above code, it fails to perform its operation of making the claim txt file, and I don't know why. Is the problem threading-related or not? Can anyone tell me if I need to add anything more to the above code? It would be great if somebody could provide some help on this topic.


  • How to migrate primary key generation from "increment" to "hi-lo"?

    - by Bevan
    I'm working with a moderately sized SQL Server 2008 database (around 120 tables; backups are around 4 GB compressed) where all the table primary keys are declared as simple int columns. At present, primary key values are generated by NHibernate with the increment identity generator, which has worked well thus far, but precludes moving to a multiprocessing environment. Load on the system is growing, so I'm evaluating the work required to allow the use of multiple servers accessing a common database backend.

    Transitioning to the hi-lo generator seems to be the best way forward, but I can't find a lot of detail about how such a migration would work. Will NHibernate automatically create rows in the hi-lo table for me, or do I need to script these manually? If NHibernate does insert rows automatically, does it properly take account of existing key values? If NHibernate does take care of things automatically, that's great. If not, are there any tools to help?

    Update: NHibernate's increment identifier generator works entirely in memory. It's seeded by selecting the maximum value of used identifiers from the table, but from that point on it allocates new values by a simple increment, without reference back to the underlying database table. If any other process adds rows to the table, you end up with primary key collisions. You can run multiple threads within the one process just fine, but you can't run multiple processes.

    For comparison, the NHibernate identity generator works by configuring the database tables with identity columns, putting control over primary key generation in the hands of the database. This works well, but compromises the unit of work pattern.

    The hi-lo algorithm sits in between these - generation of primary keys is coordinated through the database, allowing for multiprocessing, but actual allocation can occur entirely in memory, avoiding problems with the unit of work pattern.


  • Web design question (PHP/Ajax)

    - by tom smith
    Hi guys... Hope this isn't a waste of your time. I'm working on a project, and it occurred to me that there's a chunk of code out there that should allow me to see how others have implemented this.

    I've got a project where I'm going to have a page with a select box. The user will select an item from the select list, and based on the item selected, a separate section of the page (areaB) will change in terms of the content/tables being displayed. I then want to allow the user to go through a series of subpages in areaB, with a submit/cancel/confirm process, where the stuff in areaB changes while the rest of the page remains the same.

    I'm trying to figure out the best approach to implement this on both the client and server side. I could just have an ugly "if block" with a bunch of logic, where I completely regenerate the page each time the user selects an action. I could have an approach involving divs/frames, where I then just regenerate the targeted frame/div area - is this even possible? Or I could have some form of Ajax process, which would only alter the targeted section(s) of the page.

    So, I'm trying to talk to anyone who has ideas on how to do this, or more ideally, who knows of a good code example (client/server side) of this that I can examine. I'd really appreciate it! I've got a more detailed overview, but didn't know if it would be cool to post it here... Thanks, Tom


  • Nested namespaces, correct static library design issues

    - by PeterK
    Hello all, I'm currently in the process of developing a fairly large static library which will be used by some tools when it's finished. Now since this project is somewhat larger than anything i've been involved in so far, I realized its time to think of a good structure for the project. Using namespaces is one of those logical steps. My current approach is to divide the library into parts (which are not standalone, but their purpose calls for such a separation). I have a 'core' part which now just holds some very common typedefs and constants (used by many different parts of the library). Other parts are for example some 'utils' (hash etc.), file i/o and so on. Each of these parts has its own namespace. I have nearly finished the 'utils' part and realized that my approach probably is not the best. The problem (if we want to call it so) is that in the 'utils' namespace i need something from the 'core' namespace which results in including the core header files and many using directives. So i began to think that this probably is not a good thing and should be changed somehow. My first idea is to use nested namespaces as to have something like core::utils. Since this will require some heavy refactoring i want to ask here first. What do you think? How would you handle this? Or more generally: How to correctly design a static library in terms of namespaces and code organization? If there are some guidelines or articles about it, please mentoin them too. Thanks. Note: i'm quite sure that there are more good approaches than just one. Feel free to post your ideas, suggestions etc. Since i'm designing this library i want it to be really good. The goal is to make it as clean and FAST as possible. The only problem is that i will have to integrate a LOT of existing code and refactor it, which will really be a painful process (sigh) - thats why good structure is so important)


  • Do the new NoPIA and Type Equivalence features in C#/.NET 4.0 mean Microsoft.mshtml.dll is no longer needed?

    - by jpierson
    I'm maintaining a WPF based application which contains a WinForms based WebBrowser control that based on the IE web browser control. When we deploy, we have had to also supply Microsoft.mshtml.dll and do some custom configuration stuff for our ClickOnce publishing process as well in order to get things to work. I'm curious that with the new NoPIA and Type Equivalence features and dynamic type capabilities in C# 4.0 can we expect that if we upgrade that we can remove the dependencies on the Microsoft.mshtml.dll assembly? If so this will not only reduce the size of our deployment quite a bit but will also simplify our publishing process as well. It is my understanding that we should be able embed the types that normally get automatically generated into extra assemblies for COM types such as the MapPoint Control by Visual Studio. I don't know if this also applies to the Microsoft.mshtml.dll or even how it is done even in the most simple of cases. If somebody could provide an explanation about what the practical impact of these new features are on a project that relies on COM interop and especially the Microsoft.mshtml.dll assembly it would be of great help to me.


  • Website has become slower on a VPS, was much faster on a shared host. What's wrong?

    - by Arpit Tambi
    My shared host suspended my website citing system overload, so I moved my website to a VPS which has 4 GB RAM. But for some reason the website has become very slow. This is the vmstat output:

        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
         r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
         1  0      0 3050500      0      0    0    0     0     1    0    0  0  0 100  0  0

    Here's the Apache Benchmark output for a STATIC html page I ran on the server itself:

        Benchmarking www.ask-oracle.com (be patient)...apr_poll: The timeout specified has expired (70007)
        Total of 20 requests completed

    Update: Server config:

    - CentOS 5.6
    - 4-core CPU
    - 4 GB RAM
    - LAMP stack with APC
    - WordPress
    - Only one website

    It takes almost double the time to load now; the same website was much faster on shared hosting. I know I need to tweak some settings but have no clue where to start. I have already tried to optimize Apache, MySQL, etc.

    Update 2: CPU usage is low, see uptime output:

        11:09:02 up 7 days, 21:26, 1 user, load average: 0.09, 0.11, 0.09

    Update 3: When I load any webpage, the browser shows "Waiting" for a long time and then the page loads quickly. So I suspect the server can accept only a limited number of connections and holds extra connections in a waiting state. How do I check this?

    Update 4: Following is the output of netperf:

        TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec
        87380  16384   16384    10.00    9615.40

    Here are the Apache MPM settings from httpd.conf; do they look okay?

        <IfModule worker.c>
            StartServers          5
            MaxClients          100
            MinSpareThreads      50
            MaxSpareThreads     250
            ThreadsPerChild     125
            MaxRequestsPerChild 10000
            ServerLimit         100
        </IfModule>


  • WHMCS - Mapping of a Manually Created Invoice with its Corresponding Domain

    - by Knowledge Craving
    I am using WHMCS version 4.2.1 for maintaining domain registration & website hosting services on one of my websites. Currently, the automatic domain registration process is working great for both existing & new clients. The main way to register domains automatically is to mark the respective invoices as paid, and the domains automatically get registered through my selected registrar.

    Recently, I have been facing a major problem with domain renewals, where the domain has already been registered by us in the past. The problem is that some of the corresponding invoices are not generated automatically, so I have to manually create an invoice for each of those domain renewals. However, I am unable to map that invoice to the corresponding domain. This is required because unless the domain knows that an invoice has been created for its renewal, marking the invoice as paid will not initiate the automatic renewal process through my registrar.

    Can anybody please tell me a probable way of mapping the invoice to its corresponding domain? I've tried to explain the problem as it occurs in the best way possible. Still, if any more information regarding this is required, please ask. Any help is greatly appreciated.


  • MySQL not running on Ubuntu OS - Error 2002

    - by mgj
    Hi, I am a novice with MySQL databases. I am trying to run the MySQL server on Ubuntu 10.04. Through the Synaptic Package Manager I have installed the package mysql-client-5.1. I wonder how the database password was set for the mysql-client software that I installed this way; it would be nice if you could enlighten me on this. When I tried running the database, I encountered the error given below:

        mohnish@mohnish-laptop:/var/lib$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
        mohnish@mohnish-laptop:/var/lib$

    I referred to a similar question posted by another user, but I didn't find a solution through the proposed answers. For instance, when I tried the solutions posted for the similar question I got the following:

        mohnish@mohnish-laptop:/var/lib$ service start mysqld
        start: unrecognized service
        mohnish@mohnish-laptop:/var/lib$ ps -u mysql
        ERROR: User name does not exist.
        [ps then prints its usage summary: simple selection (-A, -C, -N, ...),
         selection by list (-G, -U, -p, -t, ...), output formats (-o, -f, -j, -l,
         u, v, X, ...) and misc options (-V, L, f, -m, -y, -M, -c, -w, -H, ...)]
        mohnish@mohnish-laptop:/var/lib$ which mysql
        /usr/bin/mysql
        mohnish@mohnish-laptop:/var/lib$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I even tried referring to http://forums.mysql.com/read.php?11,27769,84713#msg-84713 but couldn't find anything useful. Please let me know how I could tackle this error. Thank you very much.


  • Domain Transfer Protection - need advice

    - by Jack
    Hey, I am about to purchase a domain name for a bit of money. I do not personally know the person I am purchasing the domain name from; we have only chatted via email. The proposed process for the transfer is:

    1. The owner of the domain lowers the domain name security and emails me the domain password.
    2. I request the transfer.
    3. After the request, I transfer the money via PayPal.
    4. When the money has cleared, the current domain name owner confirms the transfer via the link he receives by email.
    5. I wait for the domain to be transferred.

    The domain is currently registered with DirectNIC - http://www.directnic.com/

    Is this best practice? Seeing as I am paying a bit of money for this domain name, I am worried that after the money has cleared I won't see the domain name or hear from the current domain name owner again. Is there a 'domain governing body' I can report to if this is the case? Is the proposed transfer process the best solution? Any advice would be awesome. Thanks! Jack

