Search Results

Search found 25660 results on 1027 pages for 'booting issue'.


  • DB2 Driver Connection Hanging in Glassfish Connection Pool

    - by Ant
    We have an intermittent issue with the DB2 driver used from a Glassfish connection pool. When the database (DB2 on z/OS) is under stress, our multi-threaded application, which gets its DB2 connections via a Glassfish connection pool, stops doing anything. We observe the following:

    1) Looking at the server with JConsole, we can see a thread waiting indefinitely in the DB2 driver's getConnection() method. It has acquired a lock on a Vector inside the driver, and several other threads calling getConnection() are hanging, waiting for that lock to be released.

    2) Looking at the database itself, we can see connections from the Glassfish server open and waiting to be used.

    It seems there is some sort of mismatch between the connection pool on Glassfish and the connections actually open to DB2. Has anyone come across this issue, or something similar? If you need any more information that I haven't provided, please let me know!
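
    One mitigation worth trying is to give the driver a login timeout so getConnection() cannot block forever while it holds the internal lock. The sketch below is generic javax.sql code, not the poster's; the JNDI name and timeout value are assumptions, and whether the pooled data source honors setLoginTimeout is something to verify.

        import java.sql.Connection;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class TimedLookup {
            public static Connection getConnectionWithTimeout() throws Exception {
                // Look up the pool the same way the application would (JNDI name assumed).
                DataSource ds = (DataSource) new InitialContext().lookup("jdbc/db2Pool");
                // Ask the driver to give up after 30 seconds instead of waiting indefinitely.
                ds.setLoginTimeout(30);
                return ds.getConnection();
            }
        }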

    Read the article

  • Console Errors - Not a Jquery Guru Yet

    - by user2528902
    I am hoping that someone can help me correct some issues that I am having with a custom script. I took over the management of a site and there seems to be an issue with the following code:

        /* jQUERY CUSTOM FUNCTION ------------------------------ */
        jQuery(document).ready(function($) {
            $('.ngg-gallery-thumbnail-box').mouseenter(function(){
                var elmID = "#"+this.id+" img";
                $(elmID).fadeOut(300);
            });
            $('.ngg-gallery-thumbnail-box').mouseleave(function(){
                var elmID = "#"+this.id+" img";
                $(elmID).fadeIn(300);
            });
            var numbers = $('.ngg-gallery-thumbnail-box').size();
            function A(i){
                setInterval(function(){autoSlide(i)}, 7000);
            }
            A(0);
            function autoSlide(i) {
                var numbers = $('.ngg-gallery-thumbnail-box').size();
                var elmCls = $("#ref").attr("class");
                $(elmCls).fadeIn(300);
                var randNum = Math.floor((Math.random()*numbers)+1);
                var elmClass = ".elm"+randNum+" img";
                $("#ref").attr("class", elmClass);
                $(elmClass).fadeOut(300);
                setInterval(function(){arguments.callee.caller(randNum)}, 7000);
            }
        });

    The error that I am seeing in the Firebug console is "TypeError: arguments.callee.caller is not a function". I am just getting started with jQuery and have no idea how to fix this. Any assistance with altering the code so that it still works but doesn't throw all of these errors (if I load the site and let it sit in my browser for 10 minutes I have over 10,000 errors in the console) would be greatly appreciated!
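
    One way the timer logic could be rewritten so it no longer relies on arguments.callee.caller and no longer stacks a new interval on every pass; a sketch, assuming the #ref element and the .elmN class convention behave as in the original:

        jQuery(document).ready(function ($) {
            $('.ngg-gallery-thumbnail-box').mouseenter(function () {
                $('#' + this.id + ' img').fadeOut(300);
            }).mouseleave(function () {
                $('#' + this.id + ' img').fadeIn(300);
            });

            function autoSlide() {
                var numbers = $('.ngg-gallery-thumbnail-box').length;
                // Restore the thumbnail hidden on the previous pass.
                $($('#ref').attr('class')).fadeIn(300);
                // Pick a new random thumbnail and hide it.
                var randNum = Math.floor(Math.random() * numbers) + 1;
                var elmClass = '.elm' + randNum + ' img';
                $('#ref').attr('class', elmClass);
                $(elmClass).fadeOut(300);
            }

            // A single repeating timer replaces the self-rescheduling calls.
            setInterval(autoSlide, 7000);
        });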

    Read the article

  • Very weird C file-handling anomaly

    - by KáGé
    Hello, I have a very weird issue that I can't figure out in my school project, which is a simulation of a simple filesystem in a human-readable text file. Unfortunately I don't yet have time to translate the comments in my code or make it less gibberish, so if that bothers you, you don't have to help, I understand. See the code HERE.

    In drive.h, at line 574, is this part:

        i = getline();
        #ifdef DEBUG
            printf("Free space in all found at %d.\n\n", i);
            if(drive.disk != NULL){
                printf("Disk OK\n\n");
            }
        #endif
        //write in data
        state = seekline(i);

    Before this it finds a place for the allocation database entry in the ALL sector (see the "image files" in the mounts folder; this issue was tested on mount_30.efs-dbf), then gets the line with i = getline() fine (getline is in lglobal.h, line 39). But after that, any file manipulation (in this case seekline's fseek, or, if I comment that out, the first fprintf after it) crashes the program straight away. I think the file somehow gets corrupted (though the Disk OK message appears), but I can't figure out how. I've tried putting i = getline(); into a comment, but it didn't make any difference. I've also asked at local programming forums, but they didn't really help either.

    The last few lines of the output before it crashes:

        Dir written. (drive.h line 562)
        Seekline entered: 268 (called at drive.h line 564)
        Getline entered. (called at drive.h line 574)
        Line got: 268.
        Free space in all found at 268. (drive.h line 576)
        Seekline entered: 268 (called at drive.h line 582, note that this exact call was run successfully less than 20 lines back. This one should set the pointer to the beginning of the line it is currently in)

    After this it crashes. Does anyone have any idea what causes this and how I could fix it? Thank you.
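
    A generic debugging aid, not taken from the project: checking the stream's error state and the return value of fseek right before the failing call can show whether the FILE pointer itself has been corrupted or the seek is simply failing.

        #include <stdio.h>

        /* Wraps fseek with the checks described above; fp and offset stand in for
         * whatever the failing seekline() call would use. */
        static int checked_seek(FILE *fp, long offset)
        {
            if (fp == NULL) {
                fprintf(stderr, "stream pointer is NULL\n");
                return -1;
            }
            if (ferror(fp)) {
                fprintf(stderr, "stream already in an error state\n");
                clearerr(fp);
            }
            if (fseek(fp, offset, SEEK_SET) != 0) {
                perror("fseek");
                return -1;
            }
            return 0;
        }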

    Read the article

  • CM and Agile validation process of merging to the Trunk?

    - by LoneCM
    Hello All, we are a new Agile shop and we are encountering an issue that I hope others have seen. In our process, the Trunk is considered an integration branch; it does not have to be releasable, but it does have to be stable and functional for others to branch off of. We create Feature branches off the Trunk for new development. All work and testing occurs in these branches. An individual branch pulls up from the Trunk as needed to stay integrated as other features are accepted and committed. We now have numerous feature branches; each is focused, has a short life cycle, and is pushed to the Trunk as it is completed, so we are not debating the need for the branches and are trying very much to be Agile.

    My issue comes in here: I require that the branches pull up from the Trunk at the end of their life cycle and complete the validation, regression testing and configuration work before pushing to the Trunk. Once reintegrated into the Trunk, I ask for at least a build and an automated smoke test. However, I am now getting push-back on the Trunk validation. The argument is that the developers can merge the code without the QA validation steps because they already completed the work in the feature branch, so the extra testing is not needed. I have attempted to remind management of the numerous times "brainless" merges have failed. Their solution, instead of a build and regression testing, is to have the developer diff the Feature branch against the newly merged Trunk; that process, in their mind, would replace the regression testing I asked for.

    So what do you require when you reintegrate back to the Trunk? What issues will we encounter if we remove this step and replace it with the diff? Is the cost of staying Agile the additional work of integrating the branches? Thanks for any input. LoneCM

    Read the article

  • Alternate way to create a clone of a UNIX System

    - by Spirit
    THE STORY (if you don't like to read much, the question is below):

    Where I work we have two HP RP2470 servers: same hardware, same number of hard drives, same everything. One of them is a production server and runs HP-UX 11.00. The poor ba***rd hasn't been turned off for years, and now I have to make a clone of it on the other server - just in case, for redundancy. The problem is simple (or not so simple): I have to make the other server exactly the same. However, the old version of the OS (UX 11.00 is history now) and the old software running on it have made my task almost impossible. On the production server there is also a cloning/recovery utility, Ignite-UX. I tried many times to create a recovery tape with it. When I load the tape on the backup server, the load succeeds (no errors, no warnings), but on the next restart it fails to load the OS and drops into HP's ISL prompt.

    THE QUESTION: Is there an alternate way to create a clone of the Unix system?

    The environment is:
    1. 2x HP RP2470 servers (non-Intel), same hardware, same number of HDDs (two in each), same everything.
    2. OS running: HP-UX 11.00

    The production server has to be cloned without downtime - sadly, though I hope they will reconsider that. For example, on Windows platforms, if you copy an entire HDD with Windows on it to another HDD and then put that HDD in another PC, it will still work as long as the hardware is the same. Can I do something like that with a Unix system? Can I somehow copy the contents of the entire HDD, put them on another HDD, and then just load that HDD into the other server? (If you haven't read the story: the servers are exactly the same.) Will it work? Can it be done with ordinary commands like cp or dump or something like that? Does anyone have a similar experience?

    UPDATE 26.01.2012 (related to "The Story"; skip it if you haven't read that part): this is a short excerpt of the recovery log from the Ignite tape - someone with more experience might notice something.

    --- READING CONTENTS OF THE IGNITE TAPE ---
    --- OUTPUT OMITTED ---
    ...
    x ./configure3, 413696 bytes, 808 tape blocks
    x ./monitor_bpr, 20480 bytes, 40 tape blocks
    * Download_mini-system: Complete
    * Loading_software: Begin
    * Installing boot area on disk.
    * Enabling swap areas.
    * Backing up LVM configuration for "vg00".
    * Processing the archive source (Recovery Archive).
    * Wed Jan 25 15:27:32 EST 2012: Starting archive load of the source (Recovery Archive).
    * Positioning the tape (/dev/rmt/0mn).
    * Archive extraction from tape is beginning. Please wait.
    * Wed Jan 25 15:39:52 EST 2012: Completed archive load of the source (Recovery Archive).
    * Executing user specified script: "/opt/ignite/data/scripts/os_arch_post_l".
    * Running in recovery mode (os_arch_post_l).
    * Running the ioinit command ("/sbin/ioinit -c")
    * Creating device files via the insf command.
    insf: Installing special files for sdisk instance 0 address 0/0/1/1.15.0
    insf: Installing special files for sdisk instance 1 address 0/0/2/0.1.0
    insf: Installing special files for sdisk instance 2 address 0/0/2/1.15.0
    insf: Installing special files for stape instance 0 address 0/0/1/0.3.0
    insf: Installing special files for btlan instance 0 address 0/0/0/0
    insf: Installing special files for btlan instance 1 address 0/2/0/0
    insf: Installing special files for pseudo driver dlpi
    insf: Installing special files for pseudo driver kepd
    insf: Installing special files for pseudo driver framebuf
    insf: Installing special files for pseudo driver sad
    * Running "/opt/upgrade/bin/tlinstall -v" and correcting transition link permissions.
    * Constructing the bootconf file.
    * Setting primary boot path to "0/0/1/1.15.0".
    * Executing: "/var/adm/sw/products/PHSS_20146/pfiles/iux_postload".
    * Executing: "/var/adm/sw/products/PHSS_25982/pfiles/iux_postload".
    NOTE: tlinstall is searching filesystem - please be patient
    NOTE: Successfully completed
    * Loading_software: Complete
    * Build_Kernel: Begin
    NOTE: Since the /stand/vmunix kernel is already in place, the kernel will not be re-built. Note that no mod_kernel directives will be processed.
    * Build_Kernel: Complete
    * Boot_From_Client_Disk: Begin
    * Rebooting machine as expected.
    NOTE: Rebooting system.
    sync'ing disks (0 buffers to flush): 0 buffers not flushed 0 buffers still dirty
    Closing open logical volumes... Done
    Console reset done.
    Boot device reset done.

    ********** VIRTUAL FRONT PANEL **********
    System Boot detected
    *****************************************
    LEDs: RUN   ATTENTION  FAULT  REMOTE  POWER
          FLASH OFF        OFF    ON      ON
    LED State: Running non-OS code. (i.e. Boot or Diagnostics)
    ...
    --- SERVER IS PERFORMING POST SEQUENCE HERE ---
    --- OUTPUT OMITTED ---
    ...
    *****************************************
    ************ EARLY BOOT VFP *************
    End of early boot detected
    *****************************************
    Firmware Version 43.50
    Duplex Console IO Dependent Code (IODC) revision 1
    ------------------------------------------------------------------------------
    (c) Copyright 1995-2002, Hewlett-Packard Company, All rights reserved
    ------------------------------------------------------------------------------
    Processor  Speed    State       CoProcessor State  Cache Size
    Number                                             Inst    Data
    ---------  -------- ----------- -----------------  ------------
    0          650 MHz  Active      Functional         750 KB  1.5 MB
    1          650 MHz  Idle        Functional         750 KB  1.5 MB

    Central Bus Speed (in MHz) : 120
    Available Memory           : 2097152 KB
    Good Memory Required       : 16140 KB

    Primary boot path:   0/0/1/1.15
    Alternate boot path: 0/0/2/1.15
    Console path:        0/0/4/1.643
    Keyboard path:       0/0/4/0.0

    Processor is starting autoboot process. To discontinue, press any key within 10 seconds.
    10 seconds expired. Proceeding...

    Trying Primary Boot Path
    ------------------------
    Booting...
    Boot IO Dependent Code (IODC) revision 1
    HARD Booted.
    ISL Revision A.00.38 OCT 26, 1994
    ISL booting hpux
    ISL>
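
    For the "can I just copy the whole disk" part of the question, a raw block-level copy is the usual low-tech approach. The sketch below is generic and heavily assumption-laden: the device names are examples, and the source disk should not be in use (i.e. the copy is done while booted from install/recovery media rather than from the running production system).

        # Clone an entire disk to an identical disk in the same machine
        # (HP-UX style raw device names shown; substitute the real ones).
        dd if=/dev/rdsk/c0t6d0 of=/dev/rdsk/c1t6d0 bs=1024k

        # Or stream the image across the network to the second server
        # (requires a remote shell such as ssh or remsh on both ends).
        dd if=/dev/rdsk/c0t6d0 bs=1024k | ssh backup-server "dd of=/dev/rdsk/c0t6d0 bs=1024k"

    This conflicts with the no-downtime constraint: copying a live filesystem block by block generally yields an inconsistent image.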

    Read the article

  • Database nesting layout confusion

    - by arzon
    I'm no expert in databases and a beginner in Rails, so here goes something that kind of confuses me... Assume I have three classes as a sample (note that no effort has been made to address any possible Rails reserved-word issues in the sample):

        class File < ActiveRecord::Base
          has_many :records, :dependent => :destroy
          accepts_nested_attributes_for :records, :allow_destroy => true
        end

        class Record < ActiveRecord::Base
          belongs_to :file
          has_many :users, :dependent => :destroy
          accepts_nested_attributes_for :users, :allow_destroy => true
        end

        class User < ActiveRecord::Base
          belongs_to :record
        end

    Upon entering records, the database contents will appear as shown below. My issue is that if there are a lot of Files for the same Record, there will be duplicate record names. The same is true if there are multiple Records for the same user in the Users table. I was wondering if there is a better way, so that one or more Files can point to a single Record entry and one or more Records can point to a single User. By the way, the File names are unique.

    Files table:

        id  name
        1   name1
        2   name2
        3   name3
        4   name4

    Records table:

        id  file_id  record_name  record_type
        1   1        ForDaisy1    ...
        2   2        ForDonald1   ...
        3   3        ForDonald2   ...
        4   4        ForDaisy1    ...

    Users table:

        id  record_id  username
        1   1          Daisy
        2   2          Donald
        3   3          Donald
        4   4          Daisy

    Is there any way to optimize the database to prevent duplication of entries, or is this really the correct and proper behavior? I spread the data out into different tables to be able to easily add new columns in the future.
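
    One possible normalization, sketched under the assumption that a record and a user should each be stored once and shared: link Files and Records through a join model, and point each Record at a single User row. The FileRecord class and its table are hypothetical names, not from the question.

        class File < ActiveRecord::Base
          has_many :file_records, :dependent => :destroy
          has_many :records, :through => :file_records
        end

        # Join table: one row per (file, record) pair, so the same record
        # can be attached to many files without duplicating its name.
        class FileRecord < ActiveRecord::Base
          belongs_to :file
          belongs_to :record
        end

        class Record < ActiveRecord::Base
          has_many :file_records
          has_many :files, :through => :file_records
          belongs_to :user   # each record points at one user row
        end

        class User < ActiveRecord::Base
          has_many :records
        end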

    Read the article

  • Is it possible to use SqlGeography with Linq to Sql?

    - by cofiem
    I've been having quite a few problems trying to use Microsoft.SqlServer.Types.SqlGeography. I know full well that support for this in LINQ to SQL is not great. I've tried numerous ways, beginning with what should be the expected way (database type of geography, CLR type of SqlGeography). This produces the NotSupportedException, which is widely discussed in blogs.

    I've then gone down the path of treating the geography column as a varbinary(max), since geography is a UDT stored as binary. This seems to work fine (with some binary reading and writing extension methods). However, I'm now running into a rather obscure issue which does not seem to have hit many other people:

        System.InvalidCastException: Unable to cast object of type 'Microsoft.SqlServer.Types.SqlGeography' to type 'System.Byte[]'.

    This error is thrown from an ObjectMaterializer when iterating through a query. It seems to occur only when the tables containing geography columns are included in a query implicitly (i.e. using the EntityRef<> properties to do joins):

        System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext()

    My question: if I'm retrieving the geography column as varbinary(max), I might expect the reverse error - can't cast byte[] to SqlGeography. That I would understand; this I don't. I do have some properties on the partial LINQ to SQL classes that hide the binary conversion... could those be the issue? Any help appreciated, and I know there's probably not enough information.
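
    For reference, conversion helpers of the kind hinted at in the question might look like the sketch below. It assumes the UDT's native serialization (Serialize/Deserialize on SqlGeography) is what is stored in the varbinary(max) column, which is an assumption rather than something stated in the question.

        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Types;

        public static class GeographyConversions
        {
            // byte[] (as read from the varbinary(max) column) -> SqlGeography
            public static SqlGeography ToGeography(byte[] bytes)
            {
                return bytes == null
                    ? SqlGeography.Null
                    : SqlGeography.Deserialize(new SqlBytes(bytes));
            }

            // SqlGeography -> byte[] suitable for writing back to the column
            public static byte[] ToBytes(SqlGeography geography)
            {
                return (geography == null || geography.IsNull)
                    ? null
                    : geography.Serialize().Value;
            }
        }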

    Read the article

  • PHPMailer with GMail: SMTP Error

    - by Abs
    Hello all, I am making use of PHPMailer to send mail through GMail. The code I use is straight from a tutorial and it works perfectly on my laptop. However, testing this on a Windows 2003 Server, it always seems to return an SMTP error:

        SMTP Error: Could not connect to SMTP host.
        Mailer Error: SMTP Error: Could not connect to SMTP host.

    Here are the settings I use in PHPMailer:

        include("phpmailer/class.phpmailer.php");
        $mail = new PHPMailer();
        $mail->IsSMTP();
        $mail->SMTPAuth = true;           // enable SMTP authentication
        $mail->SMTPSecure = "ssl";        // use ssl
        $mail->Host = "smtp.gmail.com";   // GMAIL's SMTP server
        $mail->Port = 465;                // SMTP port used by GMAIL server

    Can I say with confidence that this isn't a port issue, since I am connecting to another server on port 465 and it is sending mail? If not, please explain. How can I resolve this issue? Thanks all for any help.
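
    A quick way to separate a server/firewall problem from a PHPMailer problem is to test the raw connection from the Windows 2003 box itself; a sketch (it assumes the PHP OpenSSL extension is enabled, since the ssl:// transport needs it):

        <?php
        // Hostname and port are taken from the PHPMailer settings above.
        $fp = @fsockopen('ssl://smtp.gmail.com', 465, $errno, $errstr, 15);
        if ($fp === false) {
            // Outbound 465 is blocked (firewall/antivirus) or OpenSSL is missing.
            echo "Cannot reach smtp.gmail.com:465 - error $errno: $errstr\n";
        } else {
            echo "TCP/SSL connection succeeded\n";
            fclose($fp);
        }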

    Read the article

  • Type result with Ternary operator in C#

    - by Vaccano
    I am trying to use the ternary operator, but I am getting hung up on the type it thinks the result should be. Below is an example that I have contrived to show the issue I am having:

        class Program
        {
            public static void OutputDateTime(DateTime? datetime)
            {
                Console.WriteLine(datetime);
            }

            public static bool IsDateTimeHappy(DateTime datetime)
            {
                if (DateTime.Compare(datetime, DateTime.Parse("1/1")) == 0)
                    return true;
                return false;
            }

            static void Main(string[] args)
            {
                DateTime myDateTime = DateTime.Now;
                OutputDateTime(IsDateTimeHappy(myDateTime) ? null : myDateTime); // <-- this line has the compile issue
                Console.ReadLine();
            }
        }

    On the line indicated above, I get the following compile error:

        Type of conditional expression cannot be determined because there is no implicit conversion between '<null>' and 'System.DateTime'

    I am confused because the parameter is a nullable type (DateTime?). Why does it need to convert at all? If it is null then use that, if it is a DateTime then use that. I was under the impression that:

        condition ? first_expression : second_expression;

    was the same as:

        if (condition)
            first_expression;
        else
            second_expression;

    Clearly this is not the case. What is the reasoning behind this? (NOTE: I know that if I make "myDateTime" a nullable DateTime then it will work. But why is that needed? As I stated earlier, this is a contrived example; in my real code "myDateTime" is a data-mapped value that cannot be made nullable.)
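
    The conditional operator types its result from the two branches alone, ignoring what the result is later assigned or passed to, so one of the branches has to supply the DateTime? type itself. The usual workaround, shown as a sketch against the contrived example above:

        // Cast either branch to DateTime?; the other branch then converts implicitly.
        OutputDateTime(IsDateTimeHappy(myDateTime) ? (DateTime?)null : myDateTime);

        // Equivalent alternative: cast the non-null branch instead.
        OutputDateTime(IsDateTimeHappy(myDateTime) ? null : (DateTime?)myDateTime);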

    Read the article

  • [Cocoa] Core Animation with an NSView and subviews

    - by ndg
    I've subclassed NSView to create a 'container' view (which I've called TRTransitionView) which is being used to house two subviews. At the click of a button, I want to transition one subview out of the parent view and transition the other in, using the Core Animation transition type kCATransitionPush. For the most part, I have this working as you'd expect (here's a basic test project I threw together).

    The issue I'm seeing relates to resizing my window and then toggling between my two views. After resizing a window, my subviews will appear at seemingly random locations within my TRTransitionView. Additionally, it appears as if the TRTransitionView hasn't stretched correctly and is clipping the contents of its subviews. Ideally, I would like subviews anchored to the top-left of their parent view at all times, and to also grow to expand the size of the parent view.

    The second issue relates to an NSTableView I've placed in my first subview. When my window is resized, and my TRTransitionView resizes to match its new dimensions, my NSTableView seems to resize its content quite awkwardly (the entire table seems to jolt around) and the newly expanded space that the table now occupies seems to 'flash' (as if in the process of being animated). Extremely difficult to describe, but is there any way to stop this?

    Here's my TRTransitionView class:

        - (void)awakeFromNib {
            [self setWantsLayer:YES];
            [self addSubview:[self currentView]];
            transition = [CATransition animation];
            [transition setType:kCATransitionPush];
            [transition setSubtype:kCATransitionFromLeft];
            [self setAnimations:[NSDictionary dictionaryWithObject:transition forKey:@"subviews"]];
        }

        - (void)setCurrentView:(NSView*)newView {
            if (!currentView) {
                currentView = newView;
                return;
            }
            [[self animator] replaceSubview:currentView with:newView];
            currentView = newView;
        }

        - (IBAction)switchToViewOne:(id)sender {
            [transition setSubtype:kCATransitionFromLeft];
            [self setCurrentView:viewOne];
        }

        - (IBAction)switchToViewTwo:(id)sender {
            [transition setSubtype:kCATransitionFromRight];
            [self setCurrentView:viewTwo];
        }
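
    A sketch (not taken from the linked test project) of one way to keep each incoming subview pinned to the container: size it to the TRTransitionView's bounds and give it an autoresizing mask so it stretches with the parent.

        - (void)setCurrentView:(NSView *)newView {
            // Make the incoming view track the container's size from now on.
            [newView setFrame:[self bounds]];
            [newView setAutoresizingMask:(NSViewWidthSizable | NSViewHeightSizable)];

            if (!currentView) {
                currentView = newView;
                return;
            }
            [[self animator] replaceSubview:currentView with:newView];
            currentView = newView;
        }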

    Read the article

  • the commands ls and get of ftp are not working in vmware

    - by mnish
    Hi, I am using VMware Player version 3.1 to boot a Minix 3 OS image. After booting Minix I want to get some files from a server using ftp. The ftp connection to the server works, but when I use the commands "ls" or "get" nothing happens: it just says "200 PORT command successful" and hangs there. The only thing I can do after typing ls (or get) and pressing Enter is to exit ftp with Ctrl+C. If anyone knows a solution to this, please help. Thank you.
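
    A hang right after "200 PORT command successful" is typical of an active-mode data connection being blocked by NAT, which VMware's default networking uses; switching the client to passive mode is worth a try (this assumes the Minix ftp client supports the standard passive command):

        ftp> passive
        ftp> ls
        ftp> get yourfile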

    Read the article

  • Database design MySQL using foreign keys

    - by dscher
    I'm having a little trouble understanding how to handle the database end of a program I'm making. I'm using an ORM in Kohana, but am hoping that a generalized understanding of how to solve this issue will lead me to an answer with the ORM. I'm writing a program for users to manage their stock research information. My tables are basically like so:

        CREATE TABLE tags(
            id INT AUTO_INCREMENT NOT NULL PRIMARY KEY,
            tags VARCHAR(30),
            UNIQUE(tags)
        ) ENGINE=INNODB DEFAULT CHARSET=utf8;

        CREATE TABLE stock_tags(
            id INT AUTO_INCREMENT NOT NULL PRIMARY KEY,
            tag_id INT NOT NULL,
            stock_id INT NOT NULL,
            FOREIGN KEY (tag_id) REFERENCES tags(id),
            FOREIGN KEY (stock_id) REFERENCES stocks(id) ON DELETE CASCADE
        ) ENGINE=INNODB DEFAULT CHARSET=utf8;

        CREATE TABLE notes(
            id INT AUTO_INCREMENT NOT NULL,
            stock_id INT NOT NULL,
            notes TEXT NOT NULL,
            FOREIGN KEY (stock_id) REFERENCES stocks(id) ON DELETE CASCADE,
            PRIMARY KEY(id)
        ) ENGINE=INNODB DEFAULT CHARSET=utf8;

        CREATE TABLE links(
            id INT AUTO_INCREMENT NOT NULL,
            stock_id INT NOT NULL,
            links VARCHAR(2083) NOT NULL,
            FOREIGN KEY (stock_id) REFERENCES stocks(id) ON DELETE CASCADE,
            PRIMARY KEY(id)
        ) ENGINE=INNODB DEFAULT CHARSET=utf8;

    How would I get all the attributes of a single stock, including its links, notes, and tags? Do I have to add links, notes, and tags columns to the stocks table, and if so, how do I query it? I know this differs when using an ORM, and I'd assume that I can use join tables in SQL. Thanks for any help; this will really help me understand the issue a lot better.
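
    No extra columns are needed on the stocks table; the child rows can be pulled with joins on stock_id. A sketch, assuming a stocks table with an id column (that table is not shown above); note that joining all three child tables in one statement multiplies rows, so one query per child table is usually cleaner and is effectively what ORMs do:

        -- Tags for one stock (id 42 is an example value)
        SELECT t.tags
        FROM tags t
        JOIN stock_tags st ON st.tag_id = t.id
        WHERE st.stock_id = 42;

        -- Notes for the same stock
        SELECT n.notes FROM notes n WHERE n.stock_id = 42;

        -- Links for the same stock
        SELECT l.links FROM links l WHERE l.stock_id = 42;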

    Read the article

  • Javascript Callback when variable is set to X

    - by Erik
    Hey everyone, I have an issue I can't seem to wrap my head around. I want to write a generic JavaScript function that accepts a variable and a callback, and keeps checking until that variable is something other than false. For example, the variable SpeedFeed.user.sid is false until something else happens in the code, and I don't want to execute a particular callback until it has been set.

    The call:

        SpeedFeed.helper_ready(SpeedFeed.user.sid, function(){
            alert(SpeedFeed.user.sid); // Run function that requires sid to be set.
        });

    The function:

        helper_ready: function(vtrue, callback){
            if(vtrue != false){
                callback();
            } else {
                setTimeout(function(){
                    SpeedFeed.helper_ready(vtrue, callback);
                }, SpeedFeed.apiCheckTime);
            }
        }

    I've narrowed the issue down to the fact that, because the setTimeout re-checks vtrue instead of the actual SpeedFeed.user.sid, it is always going to be false. I realize I could write a specific function each time that just evaluates SpeedFeed.user.sid, but I'd like a generic method that I could use throughout the application. Thanks for any insight :)
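
    One generic approach is to pass a function that re-reads the value on every check, instead of passing the value itself (which is copied once at call time). A sketch using the names from the question:

        helper_ready: function(getValue, callback){
            if(getValue() !== false){
                callback();
            } else {
                setTimeout(function(){
                    SpeedFeed.helper_ready(getValue, callback);
                }, SpeedFeed.apiCheckTime);
            }
        }

        // Usage: the checker re-evaluates SpeedFeed.user.sid on every poll.
        SpeedFeed.helper_ready(
            function(){ return SpeedFeed.user.sid; },
            function(){ alert(SpeedFeed.user.sid); }
        );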

    Read the article

  • jQuery e.target bubbling best practice ?

    - by olouv
    In my heavily Ajax-based code, I always bind "click" on the body tag and act depending on $(e.target), using $.fn.hasClass(). However, when I click on an anchor that has a <span> tag inside, $(e.target) is that span node, not the parent anchor as I would like. Until now I have used this trick (with var $t = $(e.target);):

        /** bubbling **/
        if($t.get(0).tagName !== "A" && $t.get(0).tagName !== "AREA") {
            $t = $t.parent("a");
            if(empty($t)) return true;
            //else console.log("$t.bubble()", $t);
        }

    It feels wrong somehow... Do you have any better implementation? $.fn.live() does not solve my issue, as it still reports the span as the target. Moreover, I'm looking for speed (running on Atom-based touch devices) and live appeared to be about twice as slow: http://jsperf.com/bind-vs-click/3

    In fact, as @Guffa pointed out, using $.fn.live() does solve the span bubbling issue, since I don't need event.target any more. I guess there is no other "right" answer here (using bind).
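
    For completeness, jQuery's closest() expresses the same walk-up in one call (it starts at the element itself, so a direct click on the anchor also matches); a sketch of what the delegated handler could look like:

        $('body').bind('click', function(e){
            var $t = $(e.target).closest('a, area');
            if($t.length === 0) return true;
            // act on $t here, e.g. with $t.hasClass(...)
        });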

    Read the article

  • What is the best way to store site configuration data?

    - by DaveDev
    I have a question about storing site configuration data. We have a platform for web applications. The idea is that different clients can have their data hosted and displayed on their own site, which sits on top of this platform. Each site has a configuration which determines which panels relevant to the client appear on which pages.

    The system was originally designed to keep all the configuration data for each site in a database. When the site is loaded, all the configuration data is loaded into a SiteConfiguration object, and the client's panels are generated based on the content of this object. This works, but I find it very difficult to work with when applying change requests or adding new sites, because there is so much data to sift through and it's difficult to maintain a mental model of the site and its configuration.

    Recently I've been tasked with developing a subset of some of the sites to be generated as PDF documents for printing. I decided to take a different approach to defining the configuration: instead of storing configuration data in the database, I wrote XML files to contain the data. I find this much easier to work with, because instead of reading meaningless rows of data related to other meaningless rows of data, I have meaningful documents with semantic, readable information, with the relationships defined by visually understandable element nesting.

    So now, with these two approaches to storing site configuration data, I'd like to get the opinions of people more experienced in dealing with this issue. What is the best way of storing site configuration data? Is there a better way than the two I outlined here?

    Note: StackOverflow is telling me the question appears to be subjective and is likely to be closed. I'm not trying to be subjective; I'd like to know how best to approach this issue next time, and whether people with industry experience on this could provide some input.

    Read the article

  • FogBugz On Demand + online source control at low/no cost?

    - by quux
    I have a project in the free hosted FogBugz On Demand (FOD) product right now. This is great for feature/issue tracking, but I've been working from a codebase that lives solely on my development machine. I'd like to collaborate with another guy who is thousands of miles away from me, so we need a source control (SCM) solution. I use Visual Studio (2005, but can upgrade to later versions as needed). I am aware that FogBugz can integrate with a number of source control systems. So now the question is: which online SCM products can integrate well with FOD and VS? Which ones do so at low or no cost, for a small code repository? And where might I find a proven recipe for putting this together? I'm open to other solutions which provide the same functionality. Please don't suggest Trac - I regard it highly, but I want the features of FOD (especially the evidence-based scheduling) in my issue tracking solution. So really, I need to combine FOD + VS + some online SCM product into a low- or no-cost solution for two coders to collaborate on.

    Read the article

  • Full complete MySQL database replication? Ideas? What do people do?

    - by mauriciopastrana
    Currently I have two Linux servers running MySQL: one sitting on a rack right next to me under a 10 Mbit/s upload pipe (main server) and another a couple of miles away on a 3 Mbit/s upload pipe (mirror). I want to be able to replicate data on both servers continuously, but have run into several roadblocks.

    One of them is that, under MySQL master/slave configurations, every now and then some statements drop (!), meaning some people logging on to the mirror URL don't see data that I know is on the main server, and vice versa. Let's say this happens to a meaningful block of data once every month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate).

    The other most important (and annoying) recurring issue is that when, for some reason, we do a major upload or update (or reboot) on one end and have to sever the link, LOAD DATA FROM MASTER doesn't work, and I have to manually dump on one end and upload on the other - quite a task nowadays, moving some 0.5 TB worth of data.

    Is there software for this? I know MySQL (the "corporation") offers this as a VERY expensive service (full database replication). I am just wondering what people out there do. The way it's structured, we run an automatic failover where, if one server is not up, the main URL just resolves to the other server.

    Read the article

  • Dealing with Anti-Microsoft Trolls on The Internet

    - by FlySwat
    I'm an active member on Programming Reddit, but I'm one of the few C# advocates there. I could write up a 3 paragraph explanation of how to do something there, just to have it voted into the negatives because I used C# as an example. As a developer using the "Microsoft Stack", how do you handle the trolls and bigots in the online world? These are the kind of people who say things like "M$", or that Vista sucks without ever booting up. Do you just ignore the trolls?

    Read the article

  • Technology and language for a stable Digital Audio Workstation development

    - by Kill KRT
    Hi, I'm designing a cross-platform (Windows/Linux/OS X) application, something like a digital audio workstation. I'd like to create software where users have a fully featured sequencer (multiple tracks with automation) and where it is possible to create instruments using a visual language (as in Pure Data/Max MSP). Ehm... I know that I've already posted a question about a related issue, but in order to decide which technology I should use, I think I'd better do more investigation. I'm quite an expert user of audio trackers (Renoise, Protracker, ...) and sequencers (FL Studio, Cubase 5), but I've never tried to develop even a basic audio tracker. I know just the basic theory of mixing sound and how a DSP basically works. My questions are: Where can I find a good tutorial/guide/book about this topic? Do you think using C# (with NAudio) could dramatically reduce performance? I know C++ would be the best choice, but I find C# so elegant and easy to build and port, while C++ is so powerful and fast but has too many #defines and other things not to my taste! ;-) Thank you.

    Read the article

  • PAYPAL IPN Response Problem

    - by Gorkem Tolan
    I am having a problem with the PayPal IPN response. After payment is made by the customer, PayPal returns to this URL:

        www.mywebsite.com?orderid=32&tx=2AC67201DL3533325&st=Pending&amt=2.50&cc=USD&cm=&item_number=32

    There are a couple of issues:

    1. The postback field names are undefined or missing, and thus I get the INVALID message. I am not sure whether my website is failing to read the POST variables. When I looked at the IPN history, it shows that each IPN has been sent with the complete URL.

    2. The payment status keeps coming back as Pending. Does this issue cause the first issue?

    Thank you for your responses in advance. Here is the code:

        Dim strSandbox As String, strLive As String
        Dim req As HttpWebRequest
        strSandbox = "http://www.sandbox.paypal.com/cgi-bin/webscr/"
        strLive = "https://www.paypal.com/cgi-bin/webscr"
        req = CType(WebRequest.Create(strSandbox), HttpWebRequest)

        'Set values for the request back
        req.Method = "POST"
        req.ContentType = "application/x-www-form-urlencoded"
        Dim param() As Byte
        param = Request.BinaryRead(HttpContext.Current.Request.ContentLength)
        Dim strRequest As String
        strRequest = Encoding.ASCII.GetString(param)
        strRequest = strRequest & "&cmd=_notify-validate"
        req.ContentLength = strRequest.Length
        'Response.Write(strRequest)

        'Send the request to PayPal and get the response
        Dim streamOut As StreamWriter
        streamOut = New StreamWriter(req.GetRequestStream(), System.Text.Encoding.ASCII)
        streamOut.Write(strRequest)
        streamOut.Close()
        Dim streamIn As StreamReader
        streamIn = New StreamReader(req.GetResponse().GetResponseStream())
        Dim strResponse As String
        strResponse = streamIn.ReadToEnd()
        Response.Write(strResponse)
        streamIn.Close()

        If (strResponse = "VERIFIED") Then
            Response.Redirect("thankyou.aspx")
        ElseIf (strResponse = "INVALID") Then
        End If
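
    Two details worth noting, sketched below with standard IPN variable names: the tx/st/amt parameters in the return URL above come from the browser return (PDT), while the IPN itself arrives as a separate server-to-server POST; and a Pending status usually carries a pending_reason field explaining why.

        ' Hypothetical inspection of the IPN POST (field names per PayPal's IPN variables):
        Dim paymentStatus As String = Request.Form("payment_status")
        Dim pendingReason As String = Request.Form("pending_reason")
        ' Log both values to see why the payment is stuck at Pending.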

    Read the article

  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything, I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that both produce identical results (basically just feeding a couple of different streams through the decoder and CRC32-ing the outputs).

    When using the "-server" option with Sun JDK 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost, running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Execution times then become constant for each pass, starting at the 3rd iteration of the test, yet the optimized version stays 12% slower than in client mode.

    I am also pretty sure it's not a garbage collection issue, since the code involves absolutely no object allocations after startup. The code consists mainly of bit manipulation operations (stream decoding) and lots of basic floating-point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (which feeds the stream to the test and excludes disk IO from the tests) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only there it is 15% instead of 12%). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation in the measured execution times (using System.nanoTime, by the way), the variation between different test runs with the same settings never exceeded 2%, and was usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine.

    Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?
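
    On the "peek under the hood" part, HotSpot's diagnostic flags are the usual starting point; the sketch below assumes a benchmark class name (MpegBenchmark is a placeholder) and uses flags available on Sun JDKs of that era:

        # Print each method as it is JIT-compiled (and when it is made non-entrant).
        java -server -XX:+PrintCompilation MpegBenchmark

        # Show inlining decisions, which often explain client/server differences.
        java -server -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining MpegBenchmark

        # Rule out GC activity entirely while the benchmark runs.
        java -server -verbose:gc MpegBenchmark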

    Read the article

  • MS Word opens documents hosted on WebDav share read-only on Windows Vista and 7 but only if no other

    - by rjmunro
    We have a WebDAV server with some Word documents on it. (We are using PHP's HTTP_WebDAV_Server but get the same issue in tests with Apache mod_dav; both use digest authentication, since basic auth doesn't work on Vista or later.) We have a web page that opens the Word documents using JavaScript like:

        Doc = new ActiveXObject("Sharepoint.OpenDocuments.3");
        Doc.EditDocument(url, 'Word.Document');

    which causes Word to connect to the WebDAV server and open the document, bypassing IE and most of Windows' built-in WebDAV client. On Windows XP this works perfectly, and (after prompting you to log in) allows you to edit the Word document and save it back to the server.

    On Windows 7 and Windows Vista, this usually opens the document read-only, but not in all cases. After quite a bit of trial and error, we found that it works (i.e. opens read/write) if Explorer happens to be already connected to a WebDAV server. Note that this works with any WebDAV server, not necessarily the one holding the document that you are trying to edit.

    So, other than telling our users to change settings on their machines, is there anything we can do in the JavaScript SharePoint call, or on the WebDAV server, that will fix this issue?

    P.S. We have the same problem when launching Word from an HTA file version of our system, with JavaScript like:

        wordApp = new ActiveXObject("Word.application");
        wordApp.Visible = true;
        doc = wordApp.Documents.Open(url);

    P.P.S. Sorry if you think this question should be on Serverfault (or even SuperUser). I couldn't decide, but because we are programming the WebDAV server ourselves (in PHP) and I have more rep on this site than the others, I decided to post it here :-)

    Read the article

  • Can this code cause a "500" internal server error ?

    - by Scott B
    A few of my customers are reporting that they have been getting "500" Internal Server errors lately. I believe it might be caused by various plugins they are using, but each time, the hosting company (multiple hosts) says that the .htaccess file had to be replaced to fix the issue. I'm submitting the code below from my custom theme because it's the only place where I trigger an .htaccess write, and I want to be sure there are no problems here that could contribute to the 500 errors...

        if (file_exists(ABSPATH.'/wp-admin/includes/taxonomy.php')) {
            require_once(ABSPATH.'/wp-admin/includes/taxonomy.php');
            if(get_option('permalink_structure') !== "/%postname%/" || get_option('mycustomtheme_permalinks') !== "/%postname%/") {
                $mycustomtheme_permalinks = get_option('mycustomtheme_permalinks');
                require_once(ABSPATH . '/wp-admin/includes/misc.php');
                require_once(ABSPATH . '/wp-admin/includes/file.php');
                global $wp_rewrite;
                $wp_rewrite->set_permalink_structure($mycustomtheme_permalinks);
                $wp_rewrite->flush_rules();
            }
            if(!get_cat_ID('topMenu')){wp_create_category('topMenu');}
            if(!get_cat_ID('hidden')){wp_create_category('hidden');}
            if(!get_cat_ID('noads')){wp_create_category('noads');}
        }
        if (!is_dir(ABSPATH.'wp-content/uploads')) {
            mkdir(ABSPATH.'wp-content/uploads');
        }
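
    Since flush_rules() rewrites .htaccess by default, one possibility worth testing is whether this repeated hard flush is what the hosts keep repairing; WordPress accepts a soft-flush argument that skips the .htaccess write. A sketch, not a confirmed diagnosis of the 500 errors:

        // Only change permalinks when the stored structure actually differs,
        // and use a soft flush so .htaccess is left alone.
        $mycustomtheme_permalinks = get_option('mycustomtheme_permalinks');
        if (get_option('permalink_structure') !== $mycustomtheme_permalinks) {
            global $wp_rewrite;
            $wp_rewrite->set_permalink_structure($mycustomtheme_permalinks);
            $wp_rewrite->flush_rules(false); // false = regenerate rewrite rules only, no .htaccess write
        }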

    Read the article

  • Querying a 3rd party website's database from my website

    - by Mong134
    The Goal: to retrieve information from a 3rd-party database based on a user's query on my ASP.NET website.

    The Details: I need to be able to search 3rd-party websites for information relating to pharmaceutical drugs. Basically, here's what I've been tasked with: a user starts entering the name of a drug they're using in their experiments, and while they're typing, a 3rd-party website (e.g., here or here) is queried and suggestions are made based off of what they've typed. Once they've made a selection, certain properties (molecular weight, chemical structure, etc.) are retrieved from the 3rd-party database and stored in our database. PharmaGKB.org's search bar is pretty much what I need to implement, but I need to access a 3rd-party DB. The site that I'm working on is ASP.NET/C#.

    The Problem: I don't really know where to start with this. There's a downloadable Perl example at the bottom of the page here, but it didn't really help me all that much. I'm at a loss as to how to implement this, or even where to find information about how to do it. The AJAX toolkit was suggested, but I'm not sure if that will solve the issue. JavaScript is also being considered, but again, I'm not sure if that will be sufficient either.

    Perl Example Connection: as mentioned, here is a snippet from the Perl example given on the PharmGKB.org site:

        my $call = SOAP::Lite
            -> readable (1)
            -> uri('SearchService')
            -> proxy('http://www.pharmgkb.org/services/SearchService')
            -> search ($ARGV[0]);
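
    The usual shape of this is a small server-side proxy: the browser autocomplete calls your own ASP.NET site, and your server queries the third-party service and relays the result (which also avoids cross-domain restrictions in the browser). The sketch below uses a placeholder URL and query format, not a real PharmGKB endpoint, and assumes a page-method style endpoint that client script can call.

        using System;
        using System.Net;
        using System.Web.Services;

        public partial class DrugSearch : System.Web.UI.Page
        {
            // Callable from client script (e.g. via ScriptManager page methods or a
            // plain XMLHttpRequest to a thin handler wrapping the same logic).
            [WebMethod]
            public static string Suggest(string prefix)
            {
                using (var client = new WebClient())
                {
                    // Placeholder endpoint: substitute the real search service and
                    // its expected query/response format.
                    return client.DownloadString(
                        "http://example.org/drug-search?term=" + Uri.EscapeDataString(prefix));
                }
            }
        }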

    Read the article

  • VirtualBox limits size of .js file, that can be included from guest additions folder?

    - by c69
    This question might belong on SuperUser, but I'll try to ask it here anyway, because I believe some web developers might have encountered this weird behavior. When testing a site for IE8/WinXP compatibility on VirtualBox, I ran into a weird "$ is undefined" issue, caused by jQuery (and jQuery UI) not being included when referenced by a relative path that resolves to a file:/// URL - seemingly because their size was too big (above 200 KB). Simply changing the links to those two big files to http:// ones solved the issue for me.

    But here is the question: why did this happen? Is it a misconfiguration? A bug? A known design decision?

    Details:
    - VirtualBox 4.1.8
    - host OS: Win7 64-bit, guest OS: XP SP3 32-bit
    - guest additions installed; the page was launched from a VirtualBox shared folder
    - the bug manifested itself in all browsers (even in Opera, which ignores IE security settings, AFAIK)
    - IE configuration is default
    - the script was included like this: <script type="text/javascript" src="js/libs/jquery/jquery-1.7.2.js">
    - the exact size limit was not determined

    Read the article
