Search Results


  • Safari/Chrome problem with ajaxSubmit?

    - by Jan
    Hi, I'm currently having some weird issues with ajaxSubmit (http://jquery.malsup.com/form/#ajaxSubmit), which I'm using in a project. I have a flow where I need to open a form in a modal window. I'm using fancybox for that, and it works like a charm. Once the form has been opened in the fancybox window, one of two things can happen:

    1) If the user about to submit the form is logged in, she should see a confirmation in the modal box that her input was successfully submitted.
    2) If the user is not logged in, a login form should be loaded once she hits the submit button.
    2.1) When the user has logged in, she should receive a confirmation in the modal box.

    This also works like a charm in Firefox, IE8 and IE7, but not in Safari or Chrome. The weird part is that Safari and Chrome seem to completely ignore my ajaxSubmit form. To open the first form I use the following script - this part works in both Safari and Chrome:

        $(".klikEnPrisForm").ajaxForm({
            success: function(data) {
                $.fancybox({ 'content': data });
            }
        });

    My ajaxSubmit script looks like this:

        var options = {
            url: '/?altTemplate=XmlProxyKlikEnPris',
            dataType: 'xml',
            data: $(this).serializeArray(),
            success: function(data) {
                if ($(data).find('loggetind').text() == 'true') {
                    $("#klikenpris").hide();
                    $('<div id="fancybox-inner-klik"></div>').appendTo('#fancybox-inner');
                    $('#fancybox-inner-klik').load('/KlikEnPrisAccept?tilKvittering=1&sagsno=' +
                        $(data).find('sagsnummer').text() + '&pris=' + $(data).find('pris').text() +
                        '&klik-comment=' + $(data).find('kommentar').text() + '&klik-telefon=' +
                        $(data).find('tlf').text() + '&klik-maeglerkontakt=' +
                        $(data).find('maakontakte').text()).stop(true, true);
                } else {
                    $("#klikenpris").hide();
                    $("#fancybox-wrap").css({ 'width': '480px', 'height': '220px' });
                    $("#fancybox-inner").css({ 'width': '460px', 'height': '220px' });
                    $('<div id="fancybox-inner-klik"></div>').appendTo('#fancybox-inner');
                    $('#fancybox-inner-klik').load('/login.aspx?loginklikpris=0&klikpris=1&sagsno=' +
                        $(data).find('sagsnummer').text() + '&pris=' + $(data).find('pris').text() +
                        '&klik-comment=' + $(data).find('kommentar').text() + '&klik-telefon=' +
                        $(data).find('tlf').text() + '&klik-maeglerkontakt=' +
                        $(data).find('maakontakte').text()).stop(true, true);
                }
            }
        };

        // bind to the form's submit event
        $('#klikenprisform').submit(function() {
            // inside event callbacks 'this' is the DOM element, so we first
            // wrap it in a jQuery object and then invoke ajaxSubmit
            $(this).ajaxSubmit(options);
            // !!! Important !!!
            // always return false to prevent standard browser submit and page navigation
            return false;
        });

    I have tried inserting an alert in the success callback function, but it never seems to be called. It seems like the default action is not being overridden by the URL given in the ajaxSubmit options. I'm really puzzled, since this works nicely in other browsers, and I'm completely lost on how to approach the debugging in Safari/Chrome. I hope all the above makes sense, and I'm looking forward to hearing any suggestions. Cheers!
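    A hedged debugging sketch, not part of the original post: the jQuery Form plugin forwards unrecognized options through to jQuery's $.ajax, so beforeSubmit and error callbacks can surface failures the success callback never sees. One WebKit-specific suspect with dataType 'xml' is a parsererror when the response is not served with an XML content type. The alerts are illustrative only; the selector and URL are the poster's own.

        // Hedged sketch: make Safari/Chrome report what actually happens to the request.
        var debugOptions = {
            url: '/?altTemplate=XmlProxyKlikEnPris',
            dataType: 'xml',
            beforeSubmit: function(arr, $form, opts) {
                // confirms the submit handler fires at all in WebKit
                alert('submit handler fired, sending ' + arr.length + ' fields');
            },
            error: function(xhr, status, err) {
                // a 'parsererror' here points at the response content type, not the submit
                alert('ajaxSubmit failed: ' + status + ' / HTTP ' + xhr.status);
            }
        };

        $('#klikenprisform').submit(function() {
            $(this).ajaxSubmit(debugOptions);
            return false;
        });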


  • ACORD LOMA Session Highlights Policy Administration Trends

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, attended and is blogging from the ACORD LOMA Insurance Forum this week.

    Above: Paul Vancheri, Chief Information Officer, Fidelity Investments Life Insurance Company. Vancheri gave a presentation during the ACORD LOMA Insurance Systems Forum about the key elements of modern policy administration systems and how insurers can mitigate risk during legacy system migrations to safely introduce new technologies.

    When I had a few particularly challenging honors courses in college, my father, a long-time technology industry veteran, used to say, "If you don't know how to do something, go ask the experts. Find someone who has been there and done that, don't be afraid to ask the tough questions, and apply and build upon what you learn." (Actually, he still offers this same advice today.) That's probably why my favorite sessions at industry events, like the ACORD LOMA Insurance Forum this week, are those that include insight on industry trends and case studies from carriers who share their experiences and offer best practices based upon their own lessons learned. I had the opportunity to attend a particularly insightful session Wednesday as Craig Weber, senior vice president of Celent's Insurance practice, and Paul Vancheri, CIO of Fidelity Life Investments, presented "Managing the Dynamic Insurance Landscape: Enabling Growth and Profitability with a Modern Policy Administration System."

    Policy Administration Trends

    Growing the business is the top IT issue among both life and annuity and property and casualty carriers, according to Weber. To drive growth and capture market share from competitors, carriers are looking to modernize their core insurance systems, with 65 percent of the CIOs participating in recent Celent research citing plans to replace their policy administration systems. Weber noted that there has been continued focus and investment, particularly in the last three years, by software and technology vendors to offer modern, rules-based, configurable policy administration solutions. He added that these solutions continue to evolve with the ongoing aim of helping carriers rapidly meet shifting business needs - whether that is to launch new products to market faster than the competition, adapt existing products to meet shifting consumer and/or regulatory demands, or exit unprofitable markets. He closed by noting the top four trends for policy administration, either in the process of being adopted today or on the not-so-distant horizon:

    - Underwriting and service desktops
    - New business automation
    - Convergence of ultra-configurable and domain content-rich systems
    - Better usability and screen design

    Mitigating the Risk When Making the Decision to Modernize

    Third-party analyst research from advisory firms like Celent was a key part of the due diligence process for Fidelity as it sought a replacement for its legacy policy administration system back in 2005, according to Vancheri. The company's business opportunities were outrunning system capability. Its legacy system had not been upgraded in several years and was deficient from a functionality and currency standpoint, constraining the carrier's ability to rapidly configure and bring new and complex products to market. The company sought a new, modern policy administration system - one that would enable it to keep pace with rapid and often unexpected industry changes and stay ahead of the competition.
    A cross-functional team that included representatives from finance, actuarial, operations, client services and IT conducted an extensive selection process. This process included deep documentation review, pilot evaluations, demonstrations of required functionality and complex problem-solving, infrastructure integration capability, and the ability to meet the company's desired cost model. The company ultimately selected an adaptive policy administration system that met its requirements to:

    - Deliver ease of use - eliminating paper and rework, while easing the burden on representatives to sell and service annuities
    - Provide customer parity - offering Web-based capabilities in alignment with the company's focus on delivering a consistent customer experience across its business
    - Deliver scalability and efficiency - enabling automation, while simplifying and standardizing systems across its technology stack
    - Offer desired functionality - supporting Fidelity's product configuration / rules management philosophy, focus on customer service and technology upgrade requirements
    - Meet cost requirements - including implementation, professional services and license fees and ongoing maintenance
    - Deliver upon business requirements - enabling the ability to drive time to market for new products and the flexibility to make changes

    Best Practices for Addressing Implementation Challenges

    Based upon lessons learned during the company's implementation, Vancheri advised carriers to evaluate staffing capabilities and cultural impacts, review business requirements to avoid rebuilding legacy processes, factor in dependent systems, and review policies and practices to secure customer data. His formula for success: upfront planning + clear requirements = precision execution.

    Achieving a Return on Investment

    Vancheri said the decision to replace their legacy policy administration system and deploy a modern, rules-based system - before the economic downturn occurred - has been integral in helping the company adapt to shifting market conditions, while enabling growth in its direct channel sales of variable annuities. Since deploying its new policy admin system, the company has reduced its average time to market for new products from 12-15 months to 4.5 months. The company has since migrated its other products to the new system and retired its legacy system, significantly decreasing its overall product development cycle. From a processing standpoint, Vancheri noted the company has achieved gains in automation, information, and ease of use, resulting in improved real-time data edits, controls for better quality, and tax handling capability. Plus, by having only one platform to manage, the company has simplified its IT environment and is well positioned to deliver system enhancements for greater efficiencies.

    Commitment to Continuing the Investment

    In the short and longer term, Vancheri said the company plans to enhance business functionality to support money movement, wire automation, divorce processing on payout contracts and cost-based tracking improvements. It also plans to continue system upgrades to remain current, as well as focus on further reducing cycle time, driving down maintenance costs, and integrating with other products.

    Helen Pitts is senior product marketing manager for Oracle Insurance, focused on life/annuities and enterprise document automation.


  • How do I load a CSV file in Rails from a migration using LOAD DATA LOCAL INFILE?

    - by Chris Drappier
    Hi all, I have my CSV file in my public folder, and I'm trying to load it from a migration, but I get a file-not-found error using this script:

        ActiveRecord::Base.connection.execute(
          "load data local infile '#{RAILS_ROOT}/public/muds_variables.csv' into table muds_variables " +
          "fields terminated by ',' " +
          "lines terminated by '\n' " +
          "(variable_name, definition)")

    I've checked and re-checked the file path, and that's definitely where it lives. I've also tried it using just the file name without any of the path, and a few other combos, but I can't make it work. Can anyone help me out with this? Here's the error:

        Mysql::Error: File '/home/chris/rails_projects/muds/public/muds_variables.csv' not found (Errcode: 2): load data local infile '/home/chris/rails_projects/muds/public/muds_variables.csv' into table muds_variables fields terminated by ',' lines terminated by ' ' (variable_name, definition)

    -C
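    A hedged fallback, not from the original question: LOAD DATA LOCAL only works when the MySQL client library is built and connected with local_infile enabled, which the Rails MySQL adapter frequently is not, and that commonly surfaces as exactly this "not found" error. For a small file, plain Ruby sidesteps the issue; the MudsVariable model name and two-column layout are assumptions inferred from the query.

        # Hedged sketch: import the CSV in Ruby instead of relying on LOAD DATA LOCAL.
        # Assumes a MudsVariable model mapped to muds_variables(variable_name, definition).
        require 'csv'  # on Ruby 1.8, require 'fastercsv' and use FasterCSV instead

        CSV.foreach("#{RAILS_ROOT}/public/muds_variables.csv") do |row|
          MudsVariable.create!(:variable_name => row[0], :definition => row[1])
        end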


  • How to send EML data in chunks to Google Apps Mail using Google API ver 2?

    - by Preeti
    Hi, I am migrating EML mails to Google Apps. When I try to migrate an EML file with two attachments (2.1 MB and 1.96 MB), it throws an exception: "The request was aborted: The request was canceled." I am using the code below:

        MailItemEntry[] entries = new MailItemEntry[1];
        String msg = File.ReadAllText(EmlPath);
        entries[0] = new MailItemEntry();
        entries[0].Rfc822Msg = new Rfc822MsgElement(msg);
        ........
        MailItemFeed feed = mailItemService.Batch(domain, UserName, entries);

    I think sending the data in chunks could resolve this issue. So how can I send this EML data in chunks to Google Apps? Thanx
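    A hedged suggestion, not from the original post: in the .NET client libraries, "The request was canceled" is often a client-side timeout rather than a hard size limit, so raising the timeout on the service's request factory may be worth trying before restructuring the upload. The cast below assumes the default GDataRequestFactory is in use; whether this applies to your service version is an assumption.

        // Hedged sketch: raise the HTTP timeout on the GData request factory
        // (namespace Google.GData.Client) before issuing the batch call.
        GDataRequestFactory factory = (GDataRequestFactory)mailItemService.RequestFactory;
        factory.Timeout = 30 * 60 * 1000; // milliseconds; large uploads need more than the default

        MailItemFeed feed = mailItemService.Batch(domain, UserName, entries);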


  • node.js / socket.io, cookies only working locally

    - by Ben Griffiths
    I'm trying to use cookie-based sessions; however, it'll only work on the local machine, not over the network. If I remove the session-related stuff, it works just great over the network. You'll have to forgive the lack of quality code here - I'm just starting out with node/socket etc., and finding any clear guides is tough going, so I'm in n00b territory right now. Basically this is so far hacked together from various snippets with about 10% understanding of what I'm actually doing.

    The error I see in Chrome is:

        socket.io.js:1632 GET http://192.168.0.6:8080/socket.io/1/?t=1334431940273 500 (Internal Server Error)
        Socket.handshake ------- socket.io.js:1632
        Socket.connect ------- socket.io.js:1671
        Socket ------- socket.io.js:1530
        io.connect ------- socket.io.js:91
        (anonymous function) ------- /socket-test/:9
        jQuery.extend.ready ------- jquery.js:438

    And in the console for the server I see:

        debug - served static content /socket.io.js
        debug - authorized
        warn - handshake error No cookie

    My server is:

        var express = require('express')
          , app = express.createServer()
          , io = require('socket.io').listen(app)
          , connect = require('express/node_modules/connect')
          , parseCookie = connect.utils.parseCookie
          , RedisStore = require('connect-redis')(express)
          , sessionStore = new RedisStore();

        app.listen(8080, '192.168.0.6');

        app.configure(function() {
            app.use(express.cookieParser());
            app.use(express.session({
                secret: 'YOURSOOPERSEKRITKEY',
                store: sessionStore
            }));
        });

        io.configure(function() {
            io.set('authorization', function(data, callback) {
                if (data.headers.cookie) {
                    var cookie = parseCookie(data.headers.cookie);
                    sessionStore.get(cookie['connect.sid'], function(err, session) {
                        if (err || !session) {
                            callback('Error', false);
                        } else {
                            data.session = session;
                            callback(null, true);
                        }
                    });
                } else {
                    callback('No cookie', false);
                }
            });
        });

        var users_count = 0;

        io.sockets.on('connection', function(socket) {
            console.log('New Connection');
            var session = socket.handshake.session;

            ++users_count;
            io.sockets.emit('users_count', users_count);

            socket.on('something', function(data) {
                io.sockets.emit('doing_something', data['data']);
            });

            socket.on('disconnect', function() {
                --users_count;
                io.sockets.emit('users_count', users_count);
            });
        });

    My page JS is:

        jQuery(function($) {
            var socket = io.connect('http://192.168.0.6', { port: 8080 });

            socket.on('users_count', function(data) {
                $('#client_count').text(data);
            });

            socket.on('doing_something', function(data) {
                if (data == '') {
                    window.setTimeout(function() {
                        $('#target').text(data);
                    }, 3000);
                } else {
                    $('#target').text(data);
                }
            });

            $('#textbox').keydown(function() {
                socket.emit('something', { data: 'typing' });
            });

            $('#textbox').keyup(function() {
                socket.emit('something', { data: '' });
            });
        });
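    A hedged observation, not part of the original post: cookies are scoped by host, so a connect.sid session cookie set while browsing the page via one hostname (say, localhost) is never sent with a handshake to 192.168.0.6, and the authorization handler then correctly reports "No cookie". Deriving the socket host from the page's own location keeps the two origins aligned; whether this is the cause here is an assumption consistent with the local-only symptom.

        // Hedged sketch: connect to whatever host served the page itself,
        // so the session cookie set for that host accompanies the handshake.
        jQuery(function($) {
            var socket = io.connect('http://' + window.location.hostname, { port: 8080 });
            // ...rest of the page JS unchanged...
        });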


  • Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.

    - by Paul J. Warner
    I am having an issue with a program where, after 6 minutes (give or take 5 seconds), we get the above exception. Some more info from the exception stack traces is below. This all happens pretty religiously: 6 minutes go by, and bam - the following three exceptions. We have the application installed in two other environments and it is working fine there. I am hoping to find some server settings, either in IIS 6 or Server 2003, that may be causing this issue to occur. I have reviewed some of the similar questions and don't see very many answers; I am hoping that the information I have provided may help a little bit.

        208741,Exception,,,,2011-06-21 00:30:14.193,SERVERNAME,2624,1,CLIENTNAME,The underlying connection was closed: An unexpected error occurred on a receive.
        at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
        at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
        at Microsoft.Web.Services3.WebServicesClientProtocol.GetResponse(WebRequest request, IAsyncResult result)
        at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
        at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
        at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
        at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
        at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
        at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1

        208742,Exception,,,,2011-06-21 00:30:14.227,SERVERNAME,2624,1,CLIENTNAME,Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
        at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)
        at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
        at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
        at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
        at System.Net.TlsStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
        at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead),2004437127,114,1

        208743,Exception,,,,2011-06-21 00:30:14.287,SERVERNAME,2624,1,CLIENTNAME,An existing connection was forcibly closed by the remote host
        at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size),-691097507,62,1
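    A hedged workaround, not from the original post: with .NET clients talking to IIS 6, this error often means a kept-alive connection was silently dropped by the server or an intermediary (note the stack goes through the WSE3 proxy). Disabling keep-alive on the generated proxy is a common mitigation; "MyServiceProxy" is a placeholder for the generated proxy class, whose other half lives in the generated Reference.cs.

        // Hedged sketch: force a fresh HTTP connection per call by disabling keep-alive.
        public partial class MyServiceProxy : Microsoft.Web.Services3.WebServicesClientProtocol
        {
            protected override System.Net.WebRequest GetWebRequest(System.Uri uri)
            {
                System.Net.HttpWebRequest request =
                    (System.Net.HttpWebRequest)base.GetWebRequest(uri);
                request.KeepAlive = false; // don't reuse connections the server may have dropped
                return request;
            }
        }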


  • In WPF, how do I define a DataTemplate for an enum?

    - by Ashish Ashu
    I have an enum defined as Type:

        public enum Type
        {
            OneType,
            TwoType,
            ThreeType
        };

    I bind Type to a drop-down menu in a Ribbon control (I am using the Syncfusion Ribbon control), which displays each menu item with a menu name and a corresponding image. I want each enum value (like OneType) to have a data template defined that holds the name of the menu and the corresponding image. How can I define a data template for an enum? Please suggest a solution if this is possible - and tell me if it's not, or if I'm thinking in the wrong direction!
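    One hedged approach, not from the original question: WPF's implicit DataType template matching dispatches on the CLR type, not on individual enum values, so per-value templates usually go through a DataTemplateSelector that maps each value to a resource key. The "...Template" resource keys below are hypothetical names you would define in XAML, one per enum value.

        // Hedged sketch: pick a DataTemplate per enum value by resource key.
        using System.Windows;
        using System.Windows.Controls;

        public class TypeTemplateSelector : DataTemplateSelector
        {
            public override DataTemplate SelectTemplate(object item, DependencyObject container)
            {
                var element = container as FrameworkElement;
                if (element != null && item != null)
                {
                    // e.g. the OneType value looks up a resource keyed "OneTypeTemplate"
                    return element.TryFindResource(item.ToString() + "Template") as DataTemplate;
                }
                return base.SelectTemplate(item, container);
            }
        }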


  • JDBC connections: How to specify the port for data-transfer?

    - by LeO
    I want to run my JDBC connection (either Oracle or MSSQL) through a proxy server. The reason is to have additional control over the traffic, especially for development. I know I could specify the proxy, which runs on my machine, and the port in the connection string. But the specified connection settings are only taken as a kind of handshake to agree on the port over which the data is finally transferred - and that is definitely not a port under my proxy's control. So, does anybody have an idea how to specify the port for the data transfer? I would prefer if this could be done in the connection string. The same issue applies to both Oracle and MSSQL. Thx LeO
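    A hedged idea, not from the original question: rather than pinning the data port in the connection string, the JVM can route the driver's plain java.net sockets through a SOCKS proxy via system properties, which keeps all traffic - handshake and data - on a proxy you control. This assumes the driver uses ordinary java.net sockets (the Oracle thin driver and jTDS generally do); host and port values are placeholders.

        // Hedged sketch: route the JDBC driver's sockets through a local SOCKS proxy.
        public class ProxiedJdbc {
            public static void main(String[] args) throws Exception {
                System.setProperty("socksProxyHost", "127.0.0.1"); // placeholder proxy host
                System.setProperty("socksProxyPort", "1080");      // placeholder proxy port

                // Placeholder URL and credentials; the same idea applies to MSSQL.
                java.sql.Connection con = java.sql.DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password");
                con.close();
            }
        }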


  • Does this schema sound better suited for a document-oriented data store or relational?

    - by Blaine LaFreniere
    Disclaimer: let me know if this question is better suited for serverfault.com.

    I want to store information on music, specifically:

    - genres
    - artists
    - albums
    - songs

    This information will be used in a web application, and I want people to be able to see all of the songs associated with an album, albums associated with an artist, and artists associated with a genre. I'm currently using MySQL, but before I make a decision to switch I want to know:

    - How easy is scaling horizontally?
    - Is it easier to manage than an SQL-based solution?
    - Would the above data be too hard to store schema-free?
    - When I think associations, I immediately think RDBMSs; can data be stored in something like CouchDB but still keep some kind of association as stated above?
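    For illustration only, not from the original question: in a document store the same data is typically denormalized, e.g. an album document embedding its songs, with artist and genre kept as plain fields that map/reduce views can index for the "all albums by artist X" style queries. The key scheme and field names below are made up.

        // Hedged sketch of one denormalized album document (CouchDB-style JSON).
        {
            "_id": "album:dark-side-of-the-moon",   // hypothetical key scheme
            "type": "album",
            "genre": "Progressive Rock",
            "artist": "Pink Floyd",
            "songs": [
                { "track": 1, "title": "Speak to Me" },
                { "track": 2, "title": "Breathe" }
            ]
        }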


  • Limit the amount of data that can be stored in a folder on Ubuntu Server 12.04?

    - by dougoftheabaci
    I'm in the process of building my first server. It's up, it's running, and I'm transferring copious amounts of data away from my horrid little Drobo (DO NOT BUY ONE OF THESE, EVER). However, there's one thing I have yet to do: I'd like to set it up for Time Machine backups as well. I've seen all the guides and I have some idea of how to set the whole thing up, but the issue is that Time Machine will fill up as much space as you let it. So if I let it loose in my 8 TB zpool, it'll slowly consume every last available sector. This, of course, is not acceptable. I have a folder at the root of my zpool called "ZFS Time Machine" and I would like to limit it to 1 TB (all I need for backup purposes). However, I have no idea how to do that. Is this possible? I can continue using a small external hard drive attached via FW800 if I have to, but I'd much prefer putting everything on my server.
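    A hedged sketch, not from the original question: ZFS enforces quotas per dataset rather than per folder, so the usual approach is to make the Time Machine target its own dataset and cap it (a plain folder can't be capped directly, and dataset names can't contain spaces). The pool and dataset names below are placeholders.

        # Hedged sketch: create a dedicated, capped dataset for Time Machine.
        # 'tank' stands in for the real pool name.
        zfs create tank/timemachine
        zfs set quota=1T tank/timemachine

        # Verify the cap:
        zfs get quota tank/timemachine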


  • BrowserField or custom controls? Which is best for submitting and fetching data from the web?

    - by SIA
    Hi everybody, I am unable to decide what to use for my BlackBerry application. I am developing an application for BlackBerry devices that sends data to and receives data from a website - that's the only functionality. I want to know the best approach to go with: Should I use a BrowserField and display HTML in the application, or should I develop custom controls and update the UI with the data fetched from the web? Please suggest and advise. Thanks, SIA


  • How to get a clipboard paste notification and provide my own data?

    - by Uwe Keim
    For a small utility I am writing (.NET, C#), I want to monitor clipboard copy operations and clipboard paste operations. My idea is to provide my own data when pasting into an arbitrary application. The monitoring of a copy operation can easily be done by using a clipboard viewer. Something that seems much more advanced to me is to write a "clipboard paste provider":

    - Answer "what formats are available" queries from applications.
    - Supply data to application paste operations.

    I found this posting and this posting, but neither seems to really help me. What I guess is that I somehow have to mimic/hijack the current clipboard. Question: Is it possible to "wrap" the clipboard in terms of paste operations and provide my own kind of "clipboard proxy"? Thanks, Uwe
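    A hedged sketch, not from either linked posting: Win32 supports exactly this via delayed rendering - place a format on the clipboard with a null handle, and the system sends WM_RENDERFORMAT to your window only when an application actually pastes, at which point you supply the data. This is a minimal outline, not a complete proxy; strictly, clipboard memory should come from GlobalAlloc(GMEM_MOVEABLE), which is simplified here.

        // Hedged sketch: delayed clipboard rendering in C#/WinForms.
        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        public class ClipboardOwnerForm : Form
        {
            const int WM_RENDERFORMAT = 0x0305;
            const uint CF_UNICODETEXT = 13;

            [DllImport("user32.dll")] static extern bool OpenClipboard(IntPtr hWndNewOwner);
            [DllImport("user32.dll")] static extern bool EmptyClipboard();
            [DllImport("user32.dll")] static extern IntPtr SetClipboardData(uint uFormat, IntPtr hMem);
            [DllImport("user32.dll")] static extern bool CloseClipboard();

            public void OfferText()
            {
                OpenClipboard(this.Handle);
                EmptyClipboard();
                SetClipboardData(CF_UNICODETEXT, IntPtr.Zero); // null handle = render on demand
                CloseClipboard();
            }

            protected override void WndProc(ref Message m)
            {
                if (m.Msg == WM_RENDERFORMAT && (uint)m.WParam == CF_UNICODETEXT)
                {
                    // An application is pasting right now: supply our own data.
                    // (Simplified allocation; GlobalAlloc(GMEM_MOVEABLE) is the strict form.)
                    IntPtr hGlobal = Marshal.StringToHGlobalUni("proxied clipboard text");
                    SetClipboardData(CF_UNICODETEXT, hGlobal); // the system takes ownership
                    return;
                }
                base.WndProc(ref m);
            }
        }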


  • How do I allow inline images with data urls on .NET 4 without triggering request validation?

    - by Johan Driessen
    I'm using the jQuery jstree plugin (http://jstree.com) in an ASP.NET MVC 2 project on .NET 4 RC. It comes with some stylesheets that use inline images with data URLs, like this:

        .tree-checkbox ul {
            background-image: url(data:image/gif;base64,R0lGODlhAgACAIAAAB4dGf///yH5BAEAAAEALAAAAAACAAIAAAICRF4AOw==);
        }

    Now, the URL for the background image contains a colon, which .NET 4 thinks is an unsafe character, so I get this error message:

        A potentially dangerous Request.Path value was detected from the client (:).

    According to the documentation, I am supposed to be able to prevent this by adding <pages validateRequest="false" /> to my Web.config, but that doesn't seem to help. I have tried adding it to the main Web.config for the application, as well as to a special Web.config in the /config folder, but to no avail. Is there any way to get .NET to allow this?
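    A hedged sketch, not part of the original question: in .NET 4 the "Request.Path" colon check comes from httpRuntime's requestPathInvalidCharacters list rather than from page-level request validation, which is why <pages validateRequest="false" /> has no effect. Either reverting to 2.0-style validation or pruning the colon from that list is the usual fix; the list below is the documented default minus the colon.

        <!-- Hedged sketch: either revert to 2.0-style request validation... -->
        <system.web>
          <httpRuntime requestValidationMode="2.0" />
        </system.web>

        <!-- ...or drop the colon from the invalid-path-character list. -->
        <system.web>
          <httpRuntime requestPathInvalidCharacters="&lt;,&gt;,*,%,&amp;,\,?" />
        </system.web>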


  • XmlDataProvider and XPath bindings don't allow default namespace of XML data?

    - by Andy Dent
    I am struggling to work out how to use default namespaces with XmlDataProvider and XPath bindings. There's an ugly answer using local-name:

        <Binding XPath="*[local-name()='Name']" />

    but that is not acceptable to the client, who wants this XAML to be highly maintainable. The fallback is to force them to use non-default namespaces in the report XML, but that is an undesirable solution.

    The XML report file looks like the following. It will only work if I remove xmlns="http://www.acme.com/xml/schemas/report" so there is no default namespace.

        <?xml version="1.0" encoding="utf-8"?>
        <?xml-stylesheet type='text/xsl' href='PreviewReportImages.xsl'?>
        <Report xsl:schemaLocation="http://www.acme.com/xml/schemas/report BlahReport.xsd"
                xmlns:xsl="http://www.w3.org/2001/XMLSchema-instance"
                xmlns="http://www.acme.com/xml/schemas/report">
          <Service>Muncher</Service>
          <Analysis>
            <Date>27 Apr 2010</Date>
            <Time>0:09</Time>
            <Authoriser>Service Centre Manager</Authoriser>

    Which I am presenting in a window with this XAML:

        <Window x:Class="AcmeTest.ReportPreview"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="ReportPreview" Height="300" Width="300">
            <Window.Resources>
                <XmlDataProvider x:Key="Data"/>
            </Window.Resources>
            <StackPanel Orientation="Vertical"
                        DataContext="{Binding Source={StaticResource Data}, XPath=Report}">
                <TextBlock Text="{Binding XPath=Service}"/>
            </StackPanel>
        </Window>

    with code-behind used to load an XmlDocument into the XmlDataProvider (this seems to be the only way to vary loading from a file or object at runtime):

        public partial class ReportPreview : Window
        {
            private void InitXmlProvider(XmlDocument doc)
            {
                XmlDataProvider xd = (XmlDataProvider)Resources["Data"];
                xd.Document = doc;
            }

            public ReportPreview(XmlDocument doc)
            {
                InitializeComponent();
                InitXmlProvider(doc);
            }

            public ReportPreview(String reportPath)
            {
                InitializeComponent();
                var doc = new XmlDocument();
                doc.Load(reportPath);
                InitXmlProvider(doc);
            }
        }
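    A hedged sketch, not from the original question: WPF lets an XmlDataProvider carry an XmlNamespaceMappingCollection, so the document's default namespace can be bound to an explicit prefix and the XPath expressions stay readable. The "r" prefix and resource key are arbitrary choices.

        <!-- Hedged sketch: map the default namespace to a prefix for XPath bindings. -->
        <Window.Resources>
            <XmlNamespaceMappingCollection x:Key="ReportNamespaces">
                <XmlNamespaceMapping Prefix="r" Uri="http://www.acme.com/xml/schemas/report" />
            </XmlNamespaceMappingCollection>
            <XmlDataProvider x:Key="Data"
                             XmlNamespaceManager="{StaticResource ReportNamespaces}" />
        </Window.Resources>
        <StackPanel Orientation="Vertical"
                    DataContext="{Binding Source={StaticResource Data}, XPath=r:Report}">
            <TextBlock Text="{Binding XPath=r:Service}" />
        </StackPanel>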


  • How do I persist form data across an "access denied" page in Drupal?

    - by Michael T. Smith
    We're building a small sub-site that, on the front page, has a one-input-box form that users can submit. From there, they're taken to a page with a few more associated form fields (add more details, tag it, etc.), with the first main form field already filled in. This works splendidly, thus far. The problem comes for users who are not logged in. When they submit that first form, they're taken to a (LoginToboggan-based) login page that allows them to log in. After they log in, they're redirected to the second form page, but the first main form field isn't filled in - in other words, the form data didn't persist. How can we store that data and have it persist across the access-denied page?
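    A hedged sketch, not from the original question: one common Drupal pattern is to stash the first form's value in the session before the login redirect and read it back as the default value afterwards. The hook names assume a hypothetical Drupal 6-style module called mymodule and a field named first_field.

        <?php
        // Hedged sketch: persist the value across the access-denied/login round trip.
        function mymodule_first_form_submit($form, &$form_state) {
          $_SESSION['mymodule_first_value'] = $form_state['values']['first_field'];
        }

        function mymodule_second_form(&$form_state) {
          $form['first_field'] = array(
            '#type' => 'textfield',
            '#default_value' => isset($_SESSION['mymodule_first_value'])
                ? $_SESSION['mymodule_first_value'] : '',
          );
          return $form;
        }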


  • What is the best way to post data from web browser to server?

    - by Kronass
    Hi, I want to know the best way to send data from the web browser to the server using the POST method. I've seen a practice where all the element data is wrapped in XML, converted into a Base64 string, and then posted to the server (via Ajax or a hidden field). This approach won't work if JavaScript is disabled, but setting that aside: is it good practice to wrap elements in XML (or a custom wrapper in general) and post them to the server on the grounds that it enhances the maintainability of the code, or should I stick with the classical way and avoid adding unnecessary text to the post?


  • Any suggestions on data synchronization from a server to a desktop?

    - by Jamey McElveen
    I have a web-based CRM system that stores all the data from all the clients in one database (MS SQL Server). We need to build a system that maintains a local copy of the data for each client. So basically the client database will have all tables and columns except for the ClientId that is in every server table. I am aware that I will need to add fields to the server to support synchronization. Are there any good solutions or components already out there to help me accomplish this? We are using MS SQL Server and .NET C#.


  • Site-to-site VPN problem: connection successful, but data only flows one way?

    - by Charles
    To start things off, I'm not the actual administrator of the VPN server, but he is also at a loss, so I thought I'd ask here. I know it's a Cisco ASA firewall/VPN. I have a router that connects to the Cisco VPN server, and it does so successfully. I can ping everything within the remote network, and from the remote network into my own. I've been able to SSH into a remote server over the VPN as well; it all seems to work - until more data is returned. A quick example would be an internal webserver. The default homepage simply redirects, so it only sends back HTTP headers with a "Location:". I receive this on my computer, but when I then request the actual page (which isn't that big) I don't get a response at all - it just stalls. It does this for other services as well, for example SSH: I can do a couple of things while connected, but past a certain amount of output it seems to do nothing. The connection remains active throughout all of this. Has anyone ever experienced anything like this before, or does anyone know what the problem might be? Another user who has a site-to-site connection with this VPN using the exact same setup has no problems; the only difference is that I have around 200 ms ping to the VPN server/network because of a very long distance (another continent).
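    A hedged diagnostic, not from the original question: "small replies work, big replies stall" over a tunnel is the classic signature of an MTU/fragmentation black hole, which a don't-fragment ping sweep can confirm. 1472 is the usual upper bound for a 1500-byte path before any tunnel overhead; the hostname is a placeholder.

        # Hedged sketch (Linux ping): find the largest unfragmented payload across the tunnel.
        ping -M do -s 1472 remote-host   # likely fails over the VPN if MTU is the culprit
        ping -M do -s 1300 remote-host   # step the size down until replies come back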


  • Webservice randomly dropping connections - possibly due to firewall nondata events?

    - by adam
    I have a hosted webapp which requests data from a REST webservice in our office. Each page calls one (or several) webservices, which go from our host, via our firewall (a Watchguard Firebox), to a server in our office. All of a sudden, the app has dramatically slowed. We have determined that the webservice is timing out at random when called externally (it's fine when called within the office network). I'm pretty certain it's our connection that is dropping the webservice call, so I've written a quick php/curl script which calls the webservice over many iterations and shows the various timings. Below is an example output, showing both a failed and a successful call (with a 5-second timeout):

        #   http_code   namelookup_time   connect_time   pretransfer_time   starttransfer_time   total_time
        1   0           0.000096          0.0342         0.0000             0.0000               0.0342
        2   200         0.000052          0.0332         0.1327             0.1751               0.1752

    As per iteration #1 above, failed requests seem to be failing between connect and pretransfer. I'm not sure if this shows that the connection successfully gets past the firewall, or whether the firewall could still cause an issue. Our firewall is showing a series of nondata event log messages for the relevant access rule. Our IT team tells me these are routine, although I can find no mention of them in Google. I'm not sure if this fits in between connect and pretransfer. Having eliminated the webservice server (by testing internally) and the live webapp (by testing different code on different external servers), I am left suspecting the connection to the office. Could the Firebox nondata events be causing a problem between connect and pretransfer?
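    For reference, a hedged reconstruction of the kind of php/curl probe described above (the original script wasn't posted); the endpoint URL is a placeholder.

        <?php
        // Hedged sketch: call the webservice repeatedly and dump curl's phase
        // timings so failures can be localized to a connection phase.
        $url = 'https://office.example.com/webservice'; // placeholder endpoint

        for ($i = 1; $i <= 50; $i++) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_TIMEOUT, 5);
            curl_exec($ch);
            $info = curl_getinfo($ch);
            printf("%d\t%d\t%f\t%f\t%f\t%f\n",
                $i,
                $info['http_code'],
                $info['namelookup_time'],
                $info['connect_time'],
                $info['pretransfer_time'],
                $info['total_time']);
            curl_close($ch);
        }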


  • Why isn't "String or Binary data would be truncated" a more descriptive error?

    - by rwmnau
    To start: I understand what this error means - I'm not attempting to resolve an instance of it. This error is notoriously difficult to troubleshoot, because if you get it inserting a million rows into a table 100 columns wide, there's virtually no way to determine what column of what row is causing the error - you have to modify your process to insert one row at a time and then see which one fails. That's a pain, to put it mildly. Is there any reason the error doesn't look more like this?

        String or Binary data would be truncated
        Error inserting value "Some 18 char value" into SomeTable.SomeColumn VARCHAR(10)

    That would make it a lot easier to find and correct the value, if not the table structure itself. If seeing the table data is a security concern, then maybe something generic, like giving the length of the attempted value and the name of the failing column?
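    For concreteness, a hedged repro of the complaint (table and values are made up): the batch fails with the bare truncation message, naming neither row nor column.

        -- Hedged repro: SQL Server reports only the generic truncation error.
        CREATE TABLE SomeTable (SomeColumn VARCHAR(10));

        INSERT INTO SomeTable (SomeColumn) VALUES ('ok value');            -- succeeds
        INSERT INTO SomeTable (SomeColumn) VALUES ('Some 18 char value');  -- Msg 8152: String or binary data would be truncated.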


  • How to finish a broken data upload to the production Google App Engine server?

    - by WooYek
    I was uploading data to App Engine (not the dev server) through a loader class and the remote API, and I hit the quota in the middle of a CSV file. Based on the logs and the progress sqlite db, how can I select the remaining portion of the data to be uploaded? Going through tens of records to determine which were and which were not transferred is not an appealing task, so I'm looking for some way to limit the number of records I need to check. Here's the relevant (IMO) log portion - how do I interpret the work item numbers?

        [DEBUG 2010-03-30 03:22:51,757 bulkloader.py] [Thread-2] [1041-1050] Transferred 10 entities in 3.9 seconds
        [DEBUG 2010-03-30 03:22:51,757 adaptive_thread_pool.py] [Thread-2] Got work item [1071-1080]
        <cut>
        [DEBUG 2010-03-30 03:23:09,194 bulkloader.py] [Thread-1] [1141-1150] Transferred 10 entities in 4.6 seconds
        [DEBUG 2010-03-30 03:23:09,194 adaptive_thread_pool.py] [Thread-1] Got work item [1161-1170]
        <cut>
        [DEBUG 2010-03-30 03:23:09,226 bulkloader.py] [Thread-3] [1151-1160] Transferred 10 entities in 4.2 seconds
        [DEBUG 2010-03-30 03:23:09,226 adaptive_thread_pool.py] [Thread-3] Got work item [1171-1180]
        [ERROR 2010-03-30 03:23:10,174 bulkloader.py] Retrying on non-fatal HTTP error: 503 Service Unavailable


  • How to write a cctor and op= for a factory class with ptr to abstract member field?

    - by Kache4
    I'm extracting files from zip and rar archives into raw buffers. I created the following to wrap minizip and unrarlib:

    Archive.hpp

        #include "ArchiveBase.hpp"
        #include "ArchiveDerived.hpp"

        class Archive
        {
        public:
            Archive(string path)
            {
                /* logic here to determine type */
                switch (type)
                {
                case RAR:
                    archive_ = new ArchiveRar(path);
                    break;
                case ZIP:
                    archive_ = new ArchiveZip(path);
                    break;
                case UNKNOWN_ARCHIVE:
                    throw;
                    break;
                }
            }

            Archive(Archive& other)
            {
                archive_ = // how do I copy an abstract class?
            }

            ~Archive()
            {
                delete archive_;
            }

            void passThrough(ArchiveBase::Data& data)
            {
                archive_->passThrough(data);
            }

            Archive& operator = (Archive& other)
            {
                if (this == &other) return *this;
                ArchiveBase* newArchive = // can't instantiate....
                delete archive_;
                archive_ = newArchive;
                return *this;
            }

        private:
            ArchiveBase* archive_;
        }

    ArchiveBase.hpp

        class ArchiveBase
        {
        public:
            // Is there any way to put this struct in Archive instead,
            // so that outside classes instantiating one could use
            // Archive::Data instead of ArchiveBase::Data?
            struct Data
            {
                int field;
            };

            virtual void passThrough(Data& data) = 0;
            /* more methods */
        }

    ArchiveDerived.hpp ("Derived" being "Zip" or "Rar")

        #include "ArchiveBase.hpp"

        class ArchiveDerived : public ArchiveBase
        {
        public:
            ArchiveDerived(string path);
            void passThrough(ArchiveBase::Data& data);

        private:
            /* fields needed by minizip/unrarlib */
            // example zip: unzFile zipFile_;
            // example rar: RARHANDLE rarFile_;
        }

    ArchiveDerived.cpp

        #include "ArchiveDerived.hpp"

        ArchiveDerived::ArchiveDerived(string path)
        {
            //implement
        }

        ArchiveDerived::passThrough(ArchiveBase::Data& data)
        {
            //implement
        }

    Somebody had suggested I use this design so that I could do:

        Archive archiveFile(pathToZipOrRar);
        archiveFile.passThrough(extractParams); // yay polymorphism!

    How do I write a cctor for Archive? What about op= for Archive? What can I do about "renaming" ArchiveBase::Data to Archive::Data? (Both minizip and unrarlib use such structs for input and output. Data is generic for Zip & Rar and is later used to create the respective library's struct.)
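    A hedged sketch answering the copy questions, not from the original post: the standard idiom is a virtual clone() on the abstract base, which gives Archive's copy constructor and operator= a way to duplicate whatever derived object archive_ points to; a typedef handles the Data "renaming". Signatures here use const references, a small change from the original.

        // Hedged sketch: virtual cloning lets Archive copy an object it only
        // knows through an ArchiveBase*.
        class ArchiveBase
        {
        public:
            virtual ~ArchiveBase() {}
            virtual ArchiveBase* clone() const = 0; // each derived class copies itself
            // ...existing pure virtuals...
        };

        class ArchiveZip : public ArchiveBase
        {
        public:
            virtual ArchiveZip* clone() const { return new ArchiveZip(*this); } // covariant return
            // ...
        };

        class Archive
        {
        public:
            // Re-export the nested struct so callers can write Archive::Data.
            typedef ArchiveBase::Data Data;

            Archive(const Archive& other)
                : archive_(other.archive_->clone()) {}

            Archive& operator=(const Archive& other)
            {
                if (this != &other)
                {
                    ArchiveBase* newArchive = other.archive_->clone(); // copy first,
                    delete archive_;                                   // then release,
                    archive_ = newArchive;                             // for exception safety
                }
                return *this;
            }
            // ...
        private:
            ArchiveBase* archive_;
        };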


  • What could cause the file command in Linux to report a text file as data?

    - by Jonah Bishop
    I have a couple of C++ source files (one .cpp and one .h) that are being reported as type "data" by the file command in Linux. When I run file -bi against these files, I'm given this output (the same for each file):

        application/octet-stream; charset=binary

    Each file is clearly plain text (I can view them in vi). What's causing file to misreport the type of these files? Could it be some sort of Unicode thing? Both of these files were created in Windows-land (using Visual Studio 2005), but they're being compiled in Linux (it's a cross-platform application). Any ideas would be appreciated.

    Update: I don't see any null characters in either file. I found some extended characters in the .cpp file (in a comment block) and removed them, but file still reports the same encoding. I've tried forcing the encoding in SlickEdit, but that didn't seem to have an effect. When I open the file in vim, I see a [converted] line as soon as the file opens. Perhaps I can get vim to force the encoding?
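    A hedged diagnostic, not from the original question: a few stray bytes outside ASCII, or an unusual BOM, can be enough for file to fall back to "data" (vim's [converted] message suggests it is re-encoding on load). grep can locate the offending bytes and iconv can normalize them; the source charset below is a guess to try, not a known value.

        # Hedged sketch: locate non-ASCII bytes, then try normalizing the encoding.
        grep -n -P '[\x80-\xFF]' source.cpp          # show lines containing high-bit bytes

        iconv -f WINDOWS-1252 -t UTF-8 source.cpp > source.utf8.cpp   # charset is a guess
        file -bi source.utf8.cpp                     # re-check the detected type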


  • Why is SMF manifest losing configuration data when exported on SmartOS?

    - by Scott Lowe
    I'm running a server process under SMF (Service Management Facility) on Joyent's Base64 1.8.1 SmartOS image. For those not acquainted with SmartOS, it is a cloud-based distribution of IllumOS with KVM. Essentially it is like Solaris and inherits from OpenSolaris - so even if you've not used SmartOS, I'm hoping to tap into some Solaris knowledge on ServerFault.

    My issue is that I want an unprivileged user to be allowed to restart a service that they own. I have worked out how to do that by using RBAC, adding an authorization to /etc/security/auth_attr, and associating that authorization with my user. I then added the following to my SMF manifest for the service:

        <property_group name='general' type='framework'>
          <!-- Allow to be restarted -->
          <propval name='action_authorization' type='astring'
                   value='solaris.smf.manage.my-server-process' />
          <!-- Allow to be started and stopped -->
          <propval name='value_authorization' type='astring'
                   value='solaris.smf.manage.my-server-process' />
        </property_group>

    This works well when imported: my unprivileged user is allowed to restart, start and stop its own server process (this is for automated code deployments). However, if I export the SMF manifest, this configuration data is gone - all I see in that section is this:

        <property_group name='general' type='framework'>
          <property name='action_authorization' type='astring'/>
          <property name='value_authorization' type='astring'/>
        </property_group>

    Does anybody know why this is happening? Is my syntax wrong, or am I simply using SMF incorrectly?
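    A hedged check, not from the original question: comparing the live repository values before and after the export round trip can show whether the values are actually lost or merely rendered differently in the exported manifest. The service name is a placeholder for the real FMRI.

        # Hedged sketch: inspect the live values directly ('my-server-process' is a placeholder).
        svcprop -p general/action_authorization my-server-process
        svcprop -p general/value_authorization  my-server-process

        # Export again and diff against the original manifest.
        svccfg export my-server-process > /tmp/exported.xml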


  • What is the fastest way to copy all data to a new, larger hard drive?

    - by SUPER user
    I was certain this would have been covered before, but I cannot find an answer amongst all the almost-duplicates that come up; sorry if I've missed something obvious.

    I have a full 320 GB disk inside my machine, a new 1 TB disk to replace it, and a USB 2.0 chassis. It is only data on a single partition - no OS/apps involved - and the old drive will be kept somewhere as a backup (no secure wiping etc.). The simple option would be to put the new disk in the USB chassis, copy the files, then swap the drives. But for USB pen drives, reading is around 4x faster than writing. If the same is true for a USB SATA chassis (is it?), then it would be significantly faster to swap the drives first and read from the old drive over USB, right?

    The other consideration is that copying lots of files is usually slower than copying a single file of equivalent size. Is Windows 7 smart enough to do everything in a single sequential pass, or is there specialised software that should be used instead? (Even if SATA-to-SATA copying is faster than involving USB, knowing what to do when that isn't an option is useful information.)

    Summary:

    - Does a USB SATA chassis suffer from a read/write inequality (like a USB pen drive does, but unlike a direct SATA connection)?
    - Can Windows 7 do sequential access? (I can't find confirmation of whether Robocopy does this.)
    - Or is it necessary to use a bootable CD/USB with something like Clonezilla to achieve sequential copy speeds?
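    A hedged example, not from the original question: on Windows 7, robocopy is the built-in tool for a one-shot bulk copy and retries flaky reads cheaply (it copies file by file rather than imaging the disk sequentially). Drive letters and the log path are placeholders.

        :: Hedged sketch (Windows 7 command prompt; E: is the old data drive, F: the new one).
        robocopy E:\ F:\ /E /COPYALL /DCOPY:T /R:1 /W:1 /LOG:C:\copy.log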

