Search Results

Search found 7985 results on 320 pages for 'multi byte'.

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database / datastore, but the .Net app is single instance. The documents are pretty much read only (i.e. an image archive of tiffs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB).

    The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types. E.g. one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date.

    So far I've used the old method that SharePoint used (or used to use?), and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then I've got a "mapping" table which stores the customer-specific index name and the database field it will map to. I've avoided the key-value pair model in the DB due to the volume of documents. This way, we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document type searches (i.e. the user selects a document type, then they're presented with a list of search fields).

    However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. I've just got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing).

    We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding in their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static.

    So, my questions I guess:

    1. Is a NoSQL DB a good fit?
    2. Is MongoDB the best for ASP.Net? (I saw Raven and Velocity, but they're still kinda beta.)
    3. Can I store a key for each document, and then store the action history in an MSSQL DB with this key? I don't need to do joins; it would be for when a person clicks "View History" against a document.
    4. How would performance compare between the two (NoSQL DB vs denormalized "document" table)? Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
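
    A document-per-type model in something like MongoDB could look like the sketch below. This is purely illustrative: the field names, IDs and tenant layout are invented for the example, not taken from the actual design.

        // Hypothetical documents in one per-tenant collection (MongoDB shell syntax).
        // Tenant A stores an "invoice" type:
        {
            "_id": "doc-000123",              // unique ID assigned during the load process
            "docType": "invoice",
            "file": "tiffs/000123.tif",
            "indexes": {
                "customerId": "C-4411",
                "invoiceNumber": "INV-2044",
                "invoiceDate": "2010-06-01"
            }
        }

        // Tenant B stores an "application form" type with entirely different indexes,
        // no int_field_1-style placeholder columns or mapping table required:
        {
            "_id": "doc-000124",
            "docType": "applicationForm",
            "indexes": {
                "memberNumber": "M-9001",
                "applicationNumber": "A-312",
                "memberName": "J. Smith",
                "applicationDate": "2010-06-02"
            }
        }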

  • TcpListener is queuing connections faster than I can clear them

    - by Matthew Brindley
    As I understand it, TcpListener will queue connections once you call Start(). Each time you call AcceptTcpClient (or BeginAcceptTcpClient), it will dequeue one item from the queue. If we load test our TcpListener app by sending 1,000 connections to it at once, the queue builds far faster than we can clear it, leading (eventually) to timeouts from the client because it didn't get a response while its connection was still in the queue. However, the server doesn't appear to be under much pressure: our app isn't consuming much CPU time and the other monitored resources on the machine aren't breaking a sweat. It feels like we're not running efficiently enough right now.

    We're calling BeginAcceptTcpClient and then immediately handing over to a ThreadPool thread to actually do the work, then calling BeginAcceptTcpClient again. The work involved doesn't seem to put any pressure on the machine; it's basically just a 3 second sleep followed by a dictionary lookup and then a 100 byte write to the TcpClient's stream. Here's the TcpListener code we're using:

        // Thread signal.
        private static ManualResetEvent tcpClientConnected = new ManualResetEvent(false);

        public void DoBeginAcceptTcpClient(TcpListener listener) {
            // Set the event to nonsignaled state.
            tcpClientConnected.Reset();
            listener.BeginAcceptTcpClient(new AsyncCallback(DoAcceptTcpClientCallback), listener);
            // Wait for signal
            tcpClientConnected.WaitOne();
        }

        public void DoAcceptTcpClientCallback(IAsyncResult ar) {
            // Get the listener that handles the client request, and the TcpClient
            TcpListener listener = (TcpListener)ar.AsyncState;
            TcpClient client = listener.EndAcceptTcpClient(ar);
            if (inProduction)
                ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client, serverCertificate)); // With SSL
            else
                ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client)); // Without SSL
            // Signal the calling thread to continue.
            tcpClientConnected.Set();
        }

        public void Start() {
            currentHandledRequests = 0;
            tcpListener = new TcpListener(IPAddress.Any, 10000);
            try {
                tcpListener.Start();
                while (true)
                    DoBeginAcceptTcpClient(tcpListener);
            } catch (SocketException) {
                // The TcpListener is shutting down, exit gracefully
                CheckBuffer();
                return;
            }
        }

    I'm assuming the answer will be related to using Sockets instead of TcpListener, or at least using TcpListener.AcceptSocket, but I wondered how we'd go about doing that?

    One idea we had was to call AcceptTcpClient and immediately Enqueue the TcpClient into one of multiple Queue<TcpClient> objects. That way, we could poll those queues on separate threads (one queue per thread), without running into monitors that might block the thread while waiting for other Dequeue operations. Each queue thread could then use ThreadPool.QueueUserWorkItem to have the work done in a ThreadPool thread and then move on to dequeuing the next TcpClient in its queue. Would you recommend this approach, or is our problem that we're using TcpListener and no amount of rapid dequeuing is going to fix that?
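
    For comparison, one common pattern is to start the next BeginAcceptTcpClient from inside the callback itself, so no thread ever blocks on a ManualResetEvent between accepts. This is only a sketch: HandleTcpRequest stands in for the real work and error handling is kept minimal.

        // Minimal accept-chaining sketch: each callback immediately re-arms the
        // accept before handing the connection to the thread pool.
        private TcpListener tcpListener;

        public void StartListening() {
            tcpListener = new TcpListener(IPAddress.Any, 10000);
            tcpListener.Start();
            tcpListener.BeginAcceptTcpClient(AcceptCallback, tcpListener);
        }

        private void AcceptCallback(IAsyncResult ar) {
            TcpListener listener = (TcpListener)ar.AsyncState;
            TcpClient client;
            try {
                client = listener.EndAcceptTcpClient(ar);
            } catch (ObjectDisposedException) {
                return; // listener was stopped; no more accepts
            }
            // Re-arm the accept first so new connections drain continuously.
            listener.BeginAcceptTcpClient(AcceptCallback, listener);
            ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client));
        }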

  • Odd "Object reference not set to an instance of an object" involving xWinForms

    - by Kyle
    Hey, I've been trying to get the xWinForms 3.0 library (a library with forms support in XNA) working with my C# XNA game project but I keep getting the same problem. I add the reference to my project, put in the using statement, declare a formCollection variable and then I try to initialize it. Whenever I run the project I get stopped on this line:

        formCollection = new FormCollection(this.Window, Services, ref graphics);

    It gives me the error:

        System.NullReferenceException was unhandled
        Message="Object reference not set to an instance of an object."
        Source="Microsoft.Xna.Framework"
        StackTrace:
            at Microsoft.Xna.Framework.Graphics.VertexShader..ctor(GraphicsDevice graphicsDevice, Byte[] shaderCode)
            at Microsoft.Xna.Framework.Graphics.SpriteBatch.ConstructPlatformData()
            at Microsoft.Xna.Framework.Graphics.SpriteBatch..ctor(GraphicsDevice graphicsDevice)
            at xWinFormsLib.FormCollection..ctor(GameWindow window, IServiceProvider services, GraphicsDeviceManager& graphics)
            at GameSolution.Game2.LoadContent() in C:\Users\Owner\Documents\School\Year 3\Winter\Soen 390\TeamWTF_3\SourceCode\GameSolution\GameSolution\Game2.cs:line 45
            at Microsoft.Xna.Framework.Game.Initialize()
            at GameSolution.Game2.Initialize() in C:\Users\Owner\Documents\School\Year 3\Winter\Soen 390\TeamWTF_3\SourceCode\GameSolution\GameSolution\Game2.cs:line 37
            at Microsoft.Xna.Framework.Game.Run()
            at GameSolution.Program.Main(String[] args) in C:\Users\Owner\Documents\School\Year 3\Winter\Soen 390\TeamWTF_3\SourceCode\GameSolution\GameSolution\Program.cs:line 14
        InnerException:

    In a project I downloaded that used xWinForms, I put the following code in and it compiled and ran with no error, but when I put it in my project I get the error. Am I making some stupid mistake about including DLLs or something? I've been at this for hours and I can't seem to find anything that would cause this.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Audio;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.GamerServices;
        using Microsoft.Xna.Framework.Graphics;
        using Microsoft.Xna.Framework.Input;
        using Microsoft.Xna.Framework.Media;
        using Microsoft.Xna.Framework.Net;
        using Microsoft.Xna.Framework.Storage;
        using xWinFormsLib;

        namespace GameSolution {
            public class Game2 : Microsoft.Xna.Framework.Game {
                GraphicsDeviceManager graphics;
                SpriteBatch spriteBatch;
                FormCollection formCollection;

                public Game2() {
                    graphics = new GraphicsDeviceManager(this);
                    Content.RootDirectory = "Content";
                }

                protected override void Initialize() {
                    // TODO: Add your initialization logic here
                    base.Initialize();
                }

                protected override void LoadContent() {
                    // Create a new SpriteBatch, which can be used to draw textures.
                    spriteBatch = new SpriteBatch(GraphicsDevice);
                    formCollection = new FormCollection(this.Window, Services, ref graphics);
                }

                protected override void Update(GameTime gameTime) {
                    base.Update(gameTime);
                }

                protected override void Draw(GameTime gameTime) {
                    base.Draw(gameTime);
                }
            }
        }

    Any help would be greatly appreciated ._.

  • Can Hudson branch promotion get based on project stability?

    - by Wayne
    Hudson CI server displays stability "weather", which is cool. And it allows one project build to kick off based on the successful build of another. However, how can you make that secondary project dependent additionally on the stability of multiple builds of the first project?

    Specifically, project "stable_deploy" needs to only kick off to promote a version to "stable" if project "integrate" with version 8.3.4.1233 has built and tested successfully at least 8 times, in a row. Until then, it's still in integration mode.

    IMPORTANT: A significant caveat to this is that a single set of Hudson projects gets used as a "pipeline" to process each new version through to release. So a project may have built successfully 8 times in a row, but the latest version 8.3.4.1233 may be only the 2 most recent builds; the builds prior to that may be an earlier version. We're open to completely reorganizing this, but the pipeline idea seemed to greatly reduce the amount of manual project creation and deletion.

    Is there a better way to track a version release "pipeline"? In particular, we will have multiple versions in this pipeline simultaneously in the future due to fixes or patches to older versions. We don't see how to do that yet, except to create new pipeline projects for each version, which is a real hassle.

    Here's some background detail: The TickZoom application has some very complete unit tests, some of which simulate real-time trading environments. Add to that, TickZoom makes elaborate use of parallelization for leveraging multi-core computers. Needless to say, during development of a new version there can be stability issues during integration testing which get uncovered by running the build and auto tests repeatedly. A version which builds and tests cleanly 8 times in a row without change, plus has undergone some real-world testing by users, can be deemed "stable" and promoted to the stable branch.

    Our Hudson projects look like this:

    - test: Only for testing a build, zero user visibility.
    - integrate_deploy: Promotes a test project build to the integrate branch and makes it available to the public for UA testing.
    - integrate: Repeatedly builds the integrate branch to determine if it's stable enough to promote to the stable branch. This runs the builds and tests hourly throughout every night.
    - stable_deploy: Promotes an integrate project build to the stable branch and makes it public for users who want the latest and greatest.
    - stable: Builds the stable branch once every night. After 2 weeks of successful builds (14 builds) it can go to "release candidate". And so on... it continues with "release candidate" and then "release".

  • memcpy segmentation fault on Linux but not OS X

    - by Andre
    I'm working on implementing a log-based file system for a file as a class project. I have a good amount of it working on my 64 bit OS X laptop, but when I try to run the code on the CS department's 32 bit Linux machines, I get a seg fault.

    The API we're given allows writing DISK_SECTOR_SIZE (512) bytes at a time. Our log record consists of the 512 bytes the user wants to write as well as some metadata (which sector he wants to write to, the type of operation, etc). All in all, the size of the "record" object is 528 bytes, which means each log record spans 2 sectors on the disk. The first record writes 0-512 on sector 0, and 0-15 on sector 1. The second record writes 16-512 on sector 1, and 0-31 on sector 2. The third record writes 32-512 on sector 2, and 0-47 on sector 3. Etc.

    So what I do is read the two sectors I'll be modifying into 2 freshly allocated buffers, then copy starting at record into buf1+the calculated offset, for 512-offset bytes. This works correctly on both machines. However, the second memcpy fails. Specifically, "record+DISK_SECTOR_SIZE-offset" in the below code segfaults, but only on the Linux machine.

    Running some random tests, it gets more curious. The Linux machine reports sizeof(Record) to be 528. Therefore, if I tried to memcpy from record+500 into buf for 1 byte, it shouldn't have a problem. In fact, the biggest offset I can get from record is 254. That is, memcpy(buf1, record+254, 1) works, but memcpy(buf1, record+255, 1) segfaults. Does anyone know what I'm missing?

        Record *record = malloc(sizeof(Record));
        record->tid = tid;
        record->opType = OP_WRITE;
        record->opArg = sector;
        int i;
        for (i = 0; i < DISK_SECTOR_SIZE; i++) {
            record->data[i] = buf[i]; // *buf is passed into this function
        }

        char* buf1 = malloc(DISK_SECTOR_SIZE);
        char* buf2 = malloc(DISK_SECTOR_SIZE);

        d_read(ad->disk, ad->curLogSector, buf1);
        d_read(ad->disk, ad->curLogSector+1, buf2);

        memcpy(buf1+offset, record, DISK_SECTOR_SIZE-offset);
        memcpy(buf2, record+DISK_SECTOR_SIZE-offset, offset+sizeof(Record)-sizeof(record->data));
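
    One C-semantics detail worth checking in code like this: record is a Record*, so arithmetic on it moves in units of sizeof(Record) = 528 bytes, not single bytes. record+DISK_SECTOR_SIZE-offset therefore points (512-offset)*528 bytes past the allocation, which also plausibly lines up with the observed 254 limit (254 vs 255 strides of 528 bytes being the edge of mapped heap on that machine). A byte-level offset needs a cast to char* first; a sketch of the corrected call:

        /* Pointer arithmetic on Record* is scaled by sizeof(Record).
           Cast to char* to step through the struct one byte at a time. */
        memcpy(buf2,
               (char *)record + DISK_SECTOR_SIZE - offset,
               offset + sizeof(Record) - sizeof(record->data));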

  • How can I ensure my programmatic uploads are done in the correct order?

    - by ccomet
    In our application, we store two copies of a file - an approved one and an unapproved one. Both track their versions separately. When the unapproved one is approved, all of its versions are added as new versions to the approved file. To do this properly, my code has to upload each version separately into the approved folder, and update the item each time with that version's information.

    For some reason, though, this doesn't always work properly. In my latest scenario, the latest version was uploaded first, and then all of the remaining versions were uploaded afterwards. However, my code explicitly is supposed to upload the other versions first; that's the order I wrote it in. Why is this happening? And if it is possible, how do I ensure that the versions are uploaded in the correct order?

    Clarification - It's not a problem with the enumeration; I'm getting the previous versions in the correct order. What is happening is that the final version, which is written after the loop, is being uploaded before the loop. Which really doesn't make any sense to me.

    Here's a condensed version of the relevant code:

        //These three are initialized earlier in the code.
        SPList list;       //The document library
        SPListItem item;   //The list item in the Unapproved folder
        int AID;           //The item id of the corresponding item in the Approved folder.
        byte[] contents;   //Not initialized.

        /* These uploads are happening second when they should happen first. */
        if (item.File.Versions.Count > 0) {
            //This loop is actually a separate method call if that matters.
            //For simplicity I expanded it here.
            foreach (SPFileVersion fVer in item.File.Versions) {
                if (!fVer.IsCurrentVersion) {
                    contents = fVer.OpenBinary();
                    SPFile fSub = aFolder.Files.Add(fVer.File.Name, contents, u1, fVer.CreatedBy, dt1, fVer.Created);
                    SPListItem subItem = list.GetItemById(AID);
                    //This method updates the newly uploaded version with the field data of that version.
                    UpdateFields(item.Versions.GetVersionFromLabel(fVer.VersionLabel), subItem);
                }
            }
        }

        /* This upload happens first when it should happen last. */
        //Does the same as the earlier loop, but for the final version.
        contents = item.File.OpenBinary();
        SPFile f = aFolder.Files.Add(item.File.Name, contents, u1, u2, dt1, dt2);
        SPListItem finalItem = list.GetItemById(AID);
        UpdateFields(item.Versions[0], finalItem);
        item.Delete();

  • How to predict which sections have to be put in a critical section in threading

    - by Lalit Dhake
    Hi, I am using a console application in which I used multithreading. I just want to know which sections have to be put inside a critical section. My code is:

        public class SendBusReachSMS {
            public void SchedularEntryPoint() {
                try {
                    List<ActiveBusAndItsPathInfo> ActiveBusAndItsPathInfoList = BusinessLayer.GetActiveBusAndItsPathInfoList();
                    if (ActiveBusAndItsPathInfoList != null) {
                        //SMSThreadEntryPoint smsentrypoint = new SMSThreadEntryPoint();
                        while (true) {
                            foreach (ActiveBusAndItsPathInfo ActiveBusAndItsPathInfoObj in ActiveBusAndItsPathInfoList) {
                                if (ActiveBusAndItsPathInfoObj.isSMSThreadActive == false) {
                                    DateTime CurrentTime = System.DateTime.Now;
                                    DateTime Bustime = Convert.ToDateTime(ActiveBusAndItsPathInfoObj.busObj.Timing);
                                    TimeSpan tsa = Bustime - CurrentTime;
                                    if (tsa.TotalMinutes > 0 && tsa.TotalMinutes < 5) {
                                        ThreadStart starter = delegate { SMSThreadEntryPointFunction(ActiveBusAndItsPathInfoObj); };
                                        Thread t = new Thread(starter);
                                        t.Start();
                                        t.Join();
                                    }
                                }
                            }
                        }
                    }
                } catch (Exception ex) {
                    Console.WriteLine("===========================================");
                    Console.WriteLine(ex.Message);
                    Console.WriteLine(ex.InnerException);
                    Console.WriteLine("===========================================");
                }
            }

            public void SMSThreadEntryPointFunction(ActiveBusAndItsPathInfo objActiveBusAndItsPathInfo) {
                try {
                    //mutThrd.WaitOne();
                    String consoleString = "Thread for " + objActiveBusAndItsPathInfo.busObj.Number + "\t" + " on path " + "\t" + objActiveBusAndItsPathInfo.pathObj.PathId;
                    Console.WriteLine(consoleString);
                    TrackingInfo trackingObj = new TrackingInfo();
                    string strTempBusTime = objActiveBusAndItsPathInfo.busObj.Timing;
                    while (true) {
                        trackingObj = BusinessLayer.get_TrackingInfoForSendingSMS(objActiveBusAndItsPathInfo.busObj.Number);
                        if (trackingObj.latitude != 0.0 && trackingObj.longitude != 0.0) {
                            //calculate distance
                            double distanceOfCurrentToDestination = 4.45;
                            TimeSpan CurrentTime = System.DateTime.Now.TimeOfDay;
                            TimeSpan timeLimit = objActiveBusAndItsPathInfo.sessionInTime - CurrentTime;
                            if ((distanceOfCurrentToDestination <= 5) && (timeLimit.TotalMinutes <= 5)) {
                                Console.WriteLine("Message sent to bus number's parents: " + objActiveBusAndItsPathInfo.busObj.Number);
                                break;
                            }
                        }
                    }
                    // mutThrd.ReleaseMutex();
                } catch (Exception ex) {
                    //throw;
                    Console.WriteLine("===========================================");
                    Console.WriteLine(ex.Message);
                    Console.WriteLine(ex.InnerException);
                    Console.WriteLine("===========================================");
                }
            }
        }

    Please help me with multithreading; it's a new topic for me in .NET.
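
    As a general rule of thumb (an illustration, not a review of this exact code): the sections that need a critical section are the ones that read or write state shared between threads, and here the per-bus isSMSThreadActive flag is the obvious candidate. A minimal sketch of guarding a test-and-set of that flag with lock:

        // Sketch: one lock object guards every read/write of shared per-bus state,
        // so two threads can't both see the flag as false and start duplicate work.
        private static readonly object busLock = new object();

        void TryStartSmsThread(ActiveBusAndItsPathInfo bus) {
            lock (busLock) {
                if (bus.isSMSThreadActive) return; // shared read
                bus.isSMSThreadActive = true;      // shared write
            }
            ThreadStart starter = delegate { SMSThreadEntryPointFunction(bus); };
            new Thread(starter).Start();
        }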

  • Python read multiline JSON

    - by Paul W
    I have been trying to use JSON to store settings for a program. I can't seem to get Python 2.6's JSON decoder to decode multi-line JSON strings... Here is example input, the .settings file:

        """
        {\
        'user':'username',\
        'password':'passwd',\
        }\
        """

    I have tried a couple of other syntaxes for this file, which I will specify below (with the tracebacks they cause). My Python code for reading the file in is:

        import json
        settings_text = open(".settings", "r").read()
        settings = json.loads(settings_text)

    The traceback for this is:

        Traceback (most recent call last):
          File "json_test.py", line 4, in <module>
            print json.loads(text)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/__init__.py", line 307, in loads
            return _default_decoder.decode(s)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py", line 322, in decode
            raise ValueError(errmsg("Extra data", s, end, len(s)))
        ValueError: Extra data: line 1 column 2 - line 7 column 1 (char 2 - 41)

    I assume the "Extra data" is the triple-quote. Here are the other syntaxes I have tried for the .settings file, with their respective tracebacks:

        "{\
        'user':'username',\
        'pass':'passwd'\
        }"

        Traceback (most recent call last):
          File "json_test.py", line 4, in <module>
            print json.loads(text)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/__init__.py", line 307, in loads
            return _default_decoder.decode(s)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py", line 319, in decode
            obj, end = self.raw_decode(s, idx=_w(s, 0).end())
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py", line 336, in raw_decode
            obj, end = self._scanner.iterscan(s, **kw).next()
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/scanner.py", line 55, in iterscan
            rval, next_pos = action(m, context)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py", line 155, in JSONString
            return scanstring(match.string, match.end(), encoding, strict)
        ValueError: Invalid \escape: line 1 column 2 (char 2)

        '{\
        "user":"username",\
        "pass":"passwd",\
        }'

        Traceback (most recent call last):
          File "json_test.py", line 4, in <module>
            print json.loads(text)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/__init__.py", line 307, in loads
            return _default_decoder.decode(s)
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py", line 319, in decode
            obj, end = self.raw_decode(s, idx=_w(s, 0).end())
          File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py", line 338, in raw_decode
            raise ValueError("No JSON object could be decoded")
        ValueError: No JSON object could be decoded

    If I put the settings all on one line, it decodes fine.
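
    For reference, JSON itself allows literal newlines between tokens; the surrounding quotes, backslash continuations and single-quoted keys are what the decoder rejects, not the line breaks. A minimal sketch of a parseable multi-line .settings file and its loader:

        # .settings would contain plain JSON: double quotes, no trailing comma,
        # and no surrounding quote marks or backslashes:
        #
        # {
        #     "user": "username",
        #     "password": "passwd"
        # }

        import json

        with open(".settings", "r") as f:
            settings = json.load(f)  # json.load reads directly from a file object

        print settings["user"]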

  • Why can't I use __getattr__ with Django models?

    - by Joshmaker
    I've seen examples online of people using __getattr__ with Django models, but whenever I try I get errors. (Django 1.2.3.) I don't have any problems when I am using __getattr__ on normal objects. For example:

        class Post(object):
            def __getattr__(self, name):
                return 42

    Works just fine...

        >>> from blog.models import Post
        >>> p = Post()
        >>> p.random
        42

    Now when I try it with a Django model:

        from django.db import models

        class Post(models.Model):
            def __getattr__(self, name):
                return 42

    And test it on the interpreter:

        >>> from blog.models import Post
        >>> p = Post()
        ERROR: An unexpected error occurred while tokenizing input
        The following traceback may be corrupted or invalid
        The error message is: ('EOF in multi-line statement', (6, 0))
        ---------------------------------------------------------------------------
        TypeError                                 Traceback (most recent call last)
        /Users/josh/project/ in ()
        /Users/josh/project/lib/python2.6/site-packages/django/db/models/base.pyc in __init__(self, *args, **kwargs)
            338         if kwargs:
            339             raise TypeError("'%s' is an invalid keyword argument for this function" % kwargs.keys()[0])
        --> 340         signals.post_init.send(sender=self.__class__, instance=self)
            341
            342     def __repr__(self):
        /Users/josh/project/lib/python2.6/site-packages/django/dispatch/dispatcher.pyc in send(self, sender, **named)
            160
            161         for receiver in self._live_receivers(_make_id(sender)):
        --> 162             response = receiver(signal=self, sender=sender, **named)
            163             responses.append((receiver, response))
            164         return responses
        /Users/josh/project/python2.6/site-packages/photologue/models.pyc in add_methods(sender, instance, signal, *args, **kwargs)
            728     """
            729     if hasattr(instance, 'add_accessor_methods'):
        --> 730         instance.add_accessor_methods()
            731
            732 # connect the add_accessor_methods function to the post_init signal
        TypeError: 'int' object is not callable

    Can someone explain what is going on?

    EDIT: I may have been too abstract in the examples; here is some code that is closer to what I actually would use on the website:

        class Post(models.Model):
            title = models.CharField(max_length=255)
            slug = models.SlugField()
            date_published = models.DateTimeField()
            content = RichTextField('Content', blank=True, null=True)
            # Etc...

        class CuratedPost(models.Model):
            post = models.ForeignKey('Post')
            position = models.PositiveSmallIntegerField()

            def __getattr__(self, name):
                '''
                If the user tries to access a property of the CuratedPost,
                return the property of the Post instead...
                '''
                return self.post.name
                # Etc...

    While I could create a property for each attribute of the Post class, that would lead to a lot of code duplication. Furthermore, that would mean any time I add or edit an attribute of the Post class I would have to remember to make the same change to the CuratedPost class, which seems like a recipe for code rot.
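
    A pattern worth sketching here (this is plain Python semantics, not Django-specific magic): __getattr__ is only consulted when normal lookup fails, but it is consulted for every failed name, including names like add_accessor_methods that third-party signal handlers probe with hasattr and then try to call. Returning 42 for those is what produces 'int' object is not callable. Delegating via getattr and letting AttributeError propagate for names the target lacks avoids that:

        class CuratedPost(models.Model):
            post = models.ForeignKey('Post')
            position = models.PositiveSmallIntegerField()

            def __getattr__(self, name):
                # Leave private/framework probes alone to avoid recursion and
                # to keep hasattr() honest for names like add_accessor_methods.
                if name.startswith('_'):
                    raise AttributeError(name)
                # getattr raises AttributeError itself if Post lacks the name,
                # which is exactly what callers doing hasattr() checks expect.
                return getattr(self.post, name)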

  • Should I use Python or Assembly for a super fast copy program

    - by PyNEwbie
    As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest, but I do not believe I am getting close to the limits of the capabilities of my XP machine.

    I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies. I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious, it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages. Any observations or experiences would be appreciated.

    Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for some observations on which will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk, but sometimes I need to go from a disk to the Drobo or the reverse. I am thinking that given that I can sector-copy a 3/4 full 2 terabyte drive using the LogicCube in under seven hours, I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss.)

    The reason I need to speed it up is I have had two or three cycles where something has happened during the copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.

  • Rails: AJAX Controller JS not firing...

    - by neezer
    I'm having an issue with one of my controller's AJAX functionality. Here's what I have:

        class PhotosController < ApplicationController
          # ...
          def create
            @photo = Photo.new(params[:photo])
            @photo.image_content_type = MIME::Types.type_for(@photo.image_file_name).to_s
            @photo.image_width = Paperclip::Geometry.from_file(params[:photo][:image]).width.to_i
            @photo.image_height = Paperclip::Geometry.from_file(params[:photo][:image]).height.to_i
            @photo.save!

            respond_to do |format|
              format.js
            end
          end
          # ...
        end

    This is called through a POST request sent by this code:

        $(function() {
          // add photos link
          $('a.add-photos-link').colorbox({
            overlayClose: false,
            onComplete: function() {
              wire_add_photo_modal();
            }
          });

          function wire_add_photo_modal() {
            <% session_key = ActionController::Base.session_options[:key] %>
            $('#upload_photo').uploadify({
              uploader: '/swf/uploadify.swf',
              script: '/photos',
              cancelImg: '/images/buttons/cancel.png',
              buttonText: 'Upload Photo(s)',
              auto: true,
              queueID: 'queue',
              fileDataName: 'photo[image]',
              scriptData: {
                '<%= session_key %>': '<%= u cookies[session_key] %>',
                commit: 'Adding Photo',
                controller: 'photos',
                action: 'create',
                '_method': 'post',
                'photo[gallery_id]': $('#gallery_id').val(),
                'photo[user_id]': $('#user_id').val(),
                authenticity_token: encodeURIComponent('<%= u form_authenticity_token if protect_against_forgery? %>')
              },
              multi: true
            });
          }
        });

    Finally, I have my response code in app/views/photos/create.js.erb:

        alert('photo added!');

    My log file shows that the request was successful (the photo was successfully uploaded), and it even says that it rendered the create action, yet I never get the alert. My browser shows NO JavaScript errors. Here's the log AFTER a request from the above POST request is submitted:

        Processing PhotosController#create (for 127.0.0.1 at 2010-03-16 14:35:33) [POST]
          Parameters: {"Filename"=>"tumblr_kx74k06IuI1qzt6cxo1_400.jpg", "photo"=>{"user_id"=>"1", "image"=>#<File:/tmp/RackMultipart20100316-54303-7r2npu-0>}, "commit"=>"Adding Photo", "_edited_session"=>"edited", "folder"=>"/kakagiloon/", "authenticity_token"=>"edited", "action"=>"create", "_method"=>"post", "Upload"=>"Submit Query", "controller"=>"photos"}
        [paperclip] Saving attachments.
        [paperclip] saving /public/images/assets/kakagiloon/thumbnail/tumblr_kx74k06IuI1qzt6cxo1_400.jpg
        [paperclip] saving /public/images/assets/kakagiloon/profile/tumblr_kx74k06IuI1qzt6cxo1_400.jpg
        [paperclip] saving /public/images/assets/kakagiloon/original/tumblr_kx74k06IuI1qzt6cxo1_400.jpg
        Rendering photos/create
        Completed in 248ms (View: 1, DB: 6) | 200 OK [http://edited.local/photos]

    NOTE: I edited out all the SQL statements and I put "edited" in place of sensitive info. What gives? Why aren't I getting my alert();? Please let me know if you need any more info to help me solve this issue! Thanks.

  • kXML (XmlPullParser) not hitting END_TAG

    - by Tejaswi Yerukalapudi
    Hello all, I'm trying to figure out a way to rewrite some of my XML parsing code. I'm currently working with kXML2 and here's my code:

        byte[] xmlByteArray;
        try {
            xmlByteArray = inputByteArray;
            ByteArrayInputStream xmlStream = new ByteArrayInputStream(xmlByteArray);
            InputStreamReader xmlReader = new InputStreamReader(xmlStream);
            KXmlParser parser = new KXmlParser();
            parser.setInput(xmlReader);
            parser.nextTag();
            while (true) {
                int eventType = parser.next();
                String tag = parser.getName();
                if (eventType == XmlPullParser.START_TAG) {
                    System.out.println("****************** STARTING TAG " + tag + "******************");
                    if (tag == null || tag.equalsIgnoreCase("")) {
                        continue;
                    } else if (tag.equalsIgnoreCase("Category")) {
                        // Gets the name of the category.
                        String attribValue = parser.getAttributeValue(0);
                    }
                }
                if (eventType == XmlPullParser.END_TAG) {
                    System.out.println("****************** ENDING TAG " + tag + "******************");
                } else if (eventType == XmlPullParser.END_DOCUMENT) {
                    break;
                }
            }
        } catch (Exception ex) {
        }

    My input XML is as follows:

        <root xmlns:sql="urn:schemas-microsoft-com:xml-sql" xmlns="">
            <Category name="xyz">
                <elmt1>value1</elmt1>
                <elmt2>value2</elmt2>
            </Category>
            <Category name="abc">
                <elmt1>value1</elmt1>
                <elmt2>value2</elmt2>
            </Category>
            <Category name="def">
                <elmt1>value1</elmt1>
                <elmt2>value2</elmt2>
            </Category>

    My problem, briefly, is that I'm expecting it to hit XmlPullParser.END_TAG when it encounters a closing XML tag. It does hit the XmlPullParser.START_TAG, but it just seems to skip / ignore all the END_TAGs. Is this how it's supposed to work? Or am I missing something? Any help is much appreciated, Teja.

  • Error while rendering .rdl file into PDF format

    - by Arka Chatterjee
    Hi, I am generating reports using SQL Server Reporting Services. I have generated a report and have put the .rdl report file in the "E" drive. Now, when I go to render the .rdl report file into PDF format, I am getting the exception: "An error occurred during local report processing."

    The stack trace is as follows:

        at Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, CreateAndRegisterStream createStreamCallback, Warning[]& warnings)
        at Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings)
        at Microsoft.Reporting.WebForms.LocalReport.Render(String format, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings)
        at SaltlakeSoft.APEX2.Controllers.TestPageController.RenderReport() in E:\Documents and Settings\Administrator\Desktop\afetbuild15thmayapex2\apex2\Controllers\TestPageController.cs:line 1626
        at lambda_method(ExecutionScope , ControllerBase , Object[] )
        at System.Web.Mvc.ActionMethodDispatcher.<>c__DisplayClass1.<>b__0(ControllerBase controller, Object[] parameters)
        at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters)
        at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters)
        at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters)
        at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClassa.<>b__7()
        at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)

    I am using the following code:

        LocalReport report = new LocalReport();
        report.ReportPath = @"E:\Report1.rdl";

        List employeeCollection = empRepository.FindAll().ToList();
        ReportDataSource reportDataSource = new ReportDataSource("dataSource1", employeeCollection);
        report.DataSources.Clear();
        report.DataSources.Add(reportDataSource);
        report.Refresh();

        string reportType = "PDF";
        string mimeType;
        string encoding;
        string fileNameExtension;
        string deviceInfo =
            "<DeviceInfo>" +
            "<OutputFormat>PDF</OutputFormat>" +
            "<PageWidth>8.5in</PageWidth>" +
            "<PageHeight>11in</PageHeight>" +
            "<MarginTop>0.5in</MarginTop>" +
            "<MarginLeft>1in</MarginLeft>" +
            "<MarginRight>1in</MarginRight>" +
            "<MarginBottom>0.5in</MarginBottom>" +
            "</DeviceInfo>";
        Warning[] warnings;
        string[] streams;
        byte[] renderedBytes;

        renderedBytes = report.Render(reportType, deviceInfo, out mimeType, out encoding,
            out fileNameExtension, out streams, out warnings);

        Response.Clear();
        Response.ContentType = mimeType;
        Response.AddHeader("content-disposition", "attachment; filename=foo." + fileNameExtension);
        Response.BinaryWrite(renderedBytes);
        Response.End();

    Please help me. Thanks in advance - Arka

  • What could cause a Labwindows/CVI C program to hate the number 2573?

    - by Adam Bard
    Using Windows. So I'm reading from a binary file a list of unsigned int data values. The file contains a number of datasets listed sequentially. Here's the function to read a single dataset from a char* pointing to the start of it:

        void read_dataset(char* stream, t_dataset *dataset) {
            //...some init, including setting dataset->size;
            for (i = 0; i < dataset->size; i++) {
                dataset->samples[i] = *((unsigned int *) stream);
                stream += sizeof(unsigned int);
            }
            //...
        }

    And read_dataset is called in a context such as this:

        //...
        char buff[10000];
        t_dataset* dataset = malloc( sizeof( *dataset) );
        unsigned long offset = 0;
        for (i = 0; i < number_of_datasets; i++) {
            fseek(fd_in, offset, SEEK_SET);
            if ( (n = fread(buff, sizeof(char), sizeof(*dataset), fd_in)) != sizeof(*dataset) ) {
                break;
            }
            read_dataset(buff, *dataset);
            // Do something with dataset here. It's screwed up before this, I checked.
            offset += profileSize;
        }
        //...

    Everything goes swimmingly until my loop reads the number 2573. All of a sudden it starts spitting out random and huge numbers. For example, what should be

        ... 1831 2229 2406 2637 2609 2573 2523 2247 ...

    becomes

        ... 1831 2229 2406 2637 2609 0xDB00000A 0xC7000009 0xB2000008 ...

    If you think those hex numbers look suspicious, you're right. It turns out the hex values for the values that were changed are really familiar:

        2573 -> 0xA0D
        2523 -> 0x9DB
        2247 -> 0x8C7

    So apparently this number 2573 causes my stream pointer to gain a byte. This persists until the next dataset is loaded and parsed, and God forbid it contain a number 2573. I have checked a number of spots where this happens, and each one I've checked began on 2573. I admit I'm not so talented in the world of C. What could cause this is completely and entirely opaque to me.
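
    A detail worth spelling out for anyone staring at the same symptom: 2573 is 0x0A0D, which stored little-endian is the byte pair 0x0D 0x0A, a carriage return followed by a line feed. On Windows, a file opened in text mode silently translates CR LF to LF on read, dropping one byte, which is precisely a "stream pointer gains a byte" effect. Assuming the file is opened with stdio, the usual guard is a sketch like this:

        /* Open binary data in binary mode. In text mode ("r"), the Windows CRT
           collapses the byte pair 0x0D 0x0A to 0x0A while reading, so any value
           containing those adjacent bytes (like 2573 == 0x0A0D) shifts the rest
           of the stream by one byte. */
        FILE *fd_in = fopen(path, "rb");
        if (fd_in == NULL) {
            /* handle the open failure */
        }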

  • Store a long string into a multi array

    - by QLiu
    Hello all, I have a long string array which looks like this:

        var callinfo_data = new Array(
            "1300 135 604#<b>Monday - Friday: 9:00 a.m. to 5:30 p.m. AEST</b>", //Australia
            ..
            "0844000040#<b>lunedì-venerdì ore 10:00 - 17:00 CET</b>", //Switzerland (it)
            "212 356 9707#<b>Hafta içi her gün: 10:00 - 18:00</b>", //Turkey
            "08451610009#<b>Monday - Friday: 9:00 a.m. to 6:30 p.m. GMT</b>", //UK
            "866 486 6866#<b>Monday - Friday: 7:00 a.m. to 11:00 p.m. EST</b><br />Saturday: 9:00 a.m. to 8:00 p.m. EST", //USA
            "+31208501004#<b>Monday - Friday: 9:00 a.m. to 7:30 p.m. GMT+1</b>", //other countries
            " # ");

    As you see, it contains a phone number and opening times. I can use split to separate them into two sections:

        info = callinfo_data[n].split("#");

    And then I can present them in HTML like:

        "<div id='phoneNumber'>" + info[0] + "</div><div id='openTime'>" + info[1] + "</div>"

    But my display-phone-number function will read the cookie variables and then select the right contact info to display. Like:

        phone = callinfo_data[2].split("#");
        if (locale == 'UK')
            details = phone[0] + build_dropdown(locale);
        else if (locale == 'fr')
            details = 'French Contact Details<br>' + build_dropdown(locale);
        else if (locale == 'be')
            details = 'Belgian Contact Details<br>' + build_dropdown(locale);
        else
            details = 'Unknown Contact Detail';
        writeContactInfo(details);

    My questions are: how can I build a function to load the phone number and time based on my cookie variable, like UK, in a smart way? I could hard-code everything, but I think that is too silly. I would have to write long code like:

        phone1 = callinfo_data[0].split("#");
        phone2 = callinfo_data[1].split("#");
        ... etc

    Second question: how can I load this long array into an easy-access multi array? Thank you. Regards, Qing
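
    One hedged sketch of the "smart way": parse the flat array once into a lookup object keyed by locale, so the display code never touches numeric indexes. The locale key list below is an assumption made for the example; it has to mirror the order of entries in callinfo_data:

        // Build { locale: { phone: ..., hours: ... } } once at load time.
        var locales = ['au', 'ch-it', 'tr', 'uk', 'us', 'other']; // abbreviated, illustrative order
        var callinfo = {};
        for (var i = 0; i < locales.length; i++) {
            var parts = callinfo_data[i].split('#');
            callinfo[locales[i]] = { phone: parts[0], hours: parts[1] };
        }

        // Usage with the locale read from the cookie:
        var info = callinfo[locale] || callinfo['other'];
        writeContactInfo("<div id='phoneNumber'>" + info.phone +
                         "</div><div id='openTime'>" + info.hours + "</div>");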

  • Set the Dropdown box as selected in Javascript

    - by Aruna
    I am using JavaScript in my app. In my table, I have a column named industry which contains a value like:

        id 69
        name: aruna
        username: aruna
        email: [email protected]
        password: bd09935e199eab3d48b306d181b4e3d1:i0ODTgVXbWyP1iFOV
        type: 3
        industry: | Insurance | Analytics | EIS and Process Engineering

    This industry value is inserted from a multi-select dropdown box. Now I am trying, on load, to make my form contain these values, where industry is this dropdown box:

        <select id="ind1" moslabel="Industry" onClick="industryfn();" mosreq="0" multiple="multiple" size="3" class="inputbox" name="industry1[]">
            <option value="Banking and Financial Services">Banking and Financial Services</option>
            <option value="Insurance">Insurance</option>
            <option value="Telecom">Telecom</option>
            <option value="Government ">Government </option>
            <option value="Healthcare &amp; Life sciences">Healthcare & Life sciences</option>
            <option value="Energy">Energy</option>
            <option value="Retail &amp;Consumer products">Retail &Consumer products</option>
            <option value="Energy, resources &amp; utilities">Energy, resources & utilities</option>
            <option value="Travel and Hospitality">Travel and Hospitality</option>
            <option value="Manufacturing">Manufacturing</option>
            <option value="High Tech">High Tech</option>
            <option value="Media and Information Services">Media and Information Services</option>
        </select>

    How do I keep the industry values (| Insurance | Analytics | EIS and Process Engineering) as selected?

    EDIT:

        <script>
        Window.onDomReady(function(){
            <?php $str = $user->get('industry'); ?>
            <?php $s = explode('|', $str); ?>
            var selectedFields = new Array();
            <?php for($i=1;$i<count($s);$i++){ ?>
                selectedFields.push("<?php echo $s[$i];?>");
            <?php } ?>
            for (i = 1; i < selectedFields.length; i++) {
                var select = selectedFields[i];
                for (var ii = 0; ii < document.getElementById('ind1').length; ii++) {
                    var value = document.getElementById('ind1').options[ii].value;
                    alert(value);
                    alert(select);
                    if (value == select) {
                        document.getElementById('ind1').options[ii].selected = selected;
                    } //If
                } //inner For
            } //outer For
        });
        </script>

    I have tried the above; the alert functions work correctly, but the if loop doesn't work correctly. Why so? Please help me.
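
    For contrast, a minimal sketch of the selection loop itself (the element id and the selectedFields array are taken from the post): option.selected expects a boolean, whereas selected = selected references an undeclared variable and throws a ReferenceError, which is one reason the if branch appears to do nothing. Note also that array loops normally start at index 0:

        var sel = document.getElementById('ind1');
        for (var i = 0; i < sel.options.length; i++) {
            var opt = sel.options[i];
            var chosen = false;
            for (var j = 0; j < selectedFields.length; j++) {
                // Trim both sides, since values split from "| Insurance | ..."
                // can carry stray spaces.
                if (opt.value.replace(/^\s+|\s+$/g, '') ===
                    selectedFields[j].replace(/^\s+|\s+$/g, '')) {
                    chosen = true;
                    break;
                }
            }
            opt.selected = chosen; // boolean, not the bare word "selected"
        }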

  • Meteor exception in Meteor.flush when updating a collection breaks reactivity across clients

    - by Harel
    When I'm calling Collection.update from the front end (the method call is allowed), the update does work OK, but the exception below is thrown (in Chrome's JS console, not in the server). Although the update took place, other clients connected to the same collection do not see the updates until they refresh the browser; I suspect that is because of the exception. Any idea what might cause this?

        Exception from Meteor.flush: Error: Can't create second landmark in same branch
            at Object.Spark.createLandmark (http://checkadoo.com/packages/spark/spark.js?8b4e0abcbf865e6ad778592160ec3b3401d7abd2:1085:13)
            at http://checkadoo.com/packages/templating/deftemplate.js?7f4bb363e9e340dbaaea8d74ac670af40ac82d0a:115:26
            at Object.Spark.labelBranch (http://checkadoo.com/packages/spark/spark.js?8b4e0abcbf865e6ad778592160ec3b3401d7abd2:1030:14)
            at Object.partial [as list_item] (http://checkadoo.com/packages/templating/deftemplate.js?7f4bb363e9e340dbaaea8d74ac670af40ac82d0a:114:24)
            at http://checkadoo.com/packages/handlebars/evaluate.js?ab265dbab665c32cfd7ec343166437f2e03f1a54:349:48
            at Object.Spark.labelBranch (http://checkadoo.com/packages/spark/spark.js?8b4e0abcbf865e6ad778592160ec3b3401d7abd2:1030:14)
            at branch (http://checkadoo.com/packages/handlebars/evaluate.js?ab265dbab665c32cfd7ec343166437f2e03f1a54:308:20)
            at http://checkadoo.com/packages/handlebars/evaluate.js?ab265dbab665c32cfd7ec343166437f2e03f1a54:348:20
            at Array.forEach (native)
            at Function._.each._.forEach (http://checkadoo.com/packages/underscore/underscore.js?772b2587aa2fa345fb760eff9ebe5acd97937243:76:11)

    EDIT: Here is my template for the clickable item that triggers the update:

        <template name="list_item">
            <li class="checklistitemli">
                <div class="{{checkbox_class}}" id="clitem_{{index}}">
                    <input type="checkbox" name="item_checked" value="1" id="clcheck_{{index}}" class="checklist_item_check" {{checkbox_ticked}}>
                    {{title}}
                </div>
            </li>
        </template>

    And here's the event handler for clicks on 'list_item':

        var visualCheck = function(el, checked) {
            var checkId = el.id.replace('clitem', 'clcheck');
            if (checked) {
                addClass(el, 'strikethrough');
                $('')
            } else {
                removeClass(el, 'strikethrough');
            }
            $('#' + checkId)[0].checked = checked;
        };

        Template.list_item.events = {
            'click .checklistitem': function(ev) {
                this.checked = !this.checked;
                var reverseState = !this.checked;
                var updateItem = {}, self = this, el = ev.target;
                visualCheck(el, this.checked);
                updateItem['items.' + this.index + '.checked'] = this.checked;
                console.log("The error happens here");
                Lists.update({_id: this._id}, {$set: updateItem}, {multi: false}, function(err) {
                    console.log("In callback, after the error");
                    if (err) {
                        visualCheck(el, reverseState);
                    }
                });
            }
        }

    The whole thing is available at http://checkadoo.com. (It's a port of a Tornado-based Python app of mine.)

  • Persistent (purely functional) Red-Black trees on disk performance

    - by Waneck
    I'm studying the best data structures with which to implement a simple open-source object temporal database, and currently I'm very fond of using persistent red-black trees to do it.

    My main reason for using persistent data structures is first of all to minimize the use of locks, so the database can be as parallel as possible. Also it will be easier to implement ACID transactions, and even to abstract the database to work in parallel on a cluster of some kind. The great thing about this approach is that it makes implementing temporal databases possible almost for free, and that is something quite nice to have, specially for the web and for data analysis (e.g. trends).

    All of this is very cool, but I'm a little suspicious about the overall performance of using a persistent data structure on disk. Even though there are some very fast disks available today, and all writes can be done asynchronously so a response is always immediate, I don't want to build the whole application on a false premise, only to realize it isn't really a good way to do it.

    Here's my line of thought:

    - Since all writes are done asynchronously, and using a persistent data structure makes it possible not to invalidate the previous (and currently valid) structure, the write time isn't really a bottleneck.
    - There is some literature on structures like this that are designed exactly for disk usage. But it seems to me that these techniques add more read overhead to achieve faster writes, and I think exactly the opposite is preferable. Also, many of these techniques really do end up with multi-versioned trees that aren't strictly immutable, and immutability is crucial to justify the persistence overhead.
    - I know there will still have to be some kind of locking when appending values to the database, and I also know there should be a good garbage-collecting logic if not all versions are to be maintained (otherwise the file size will surely rise dramatically). A delta compression system could also be considered.
    - Of all the search tree structures, I really think red-black trees are the closest to what I need, since they offer the least number of rotations.

    But there are some possible pitfalls along the way:

    - Asynchronous writes could affect applications that need the data in real time, but I don't think that is the case with web applications most of the time. When real-time data is needed, other solutions could be devised, like a check-in/check-out system for specific data that needs to be worked on in a more real-time manner.
    - They could also lead to some commit conflicts, though I fail to think of a good example of when that could happen. Then again, commit conflicts can occur in a normal RDBMS too, if two threads are working with the same data, right?
    - The overhead of having an immutable interface like this will grow exponentially, everything is doomed to fail soon, and this whole idea is a bad one.

    Any thoughts? Thanks!

    edit: There seems to be a misunderstanding of what a persistent data structure is: http://en.wikipedia.org/wiki/Persistent_data_structure
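
    For readers new to the term, "persistent" here means updates copy only the root-to-leaf path they touch and share every other node, so the previous version of the tree remains valid and readers never need locks. A minimal sketch of the idea (a plain BST rather than a red-black tree, to keep it short):

        -- Path copying: insert allocates new nodes only along the search path;
        -- the old root still describes the complete previous version.
        data Tree a = Leaf | Node (Tree a) a (Tree a)

        insert :: Ord a => a -> Tree a -> Tree a
        insert x Leaf = Node Leaf x Leaf
        insert x t@(Node l v r)
          | x < v     = Node (insert x l) v r   -- right subtree shared as-is
          | x > v     = Node l v (insert x r)   -- left subtree shared as-is
          | otherwise = t                       -- value present: share whole tree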

  • PHP upload file function

    - by Jacksta
    I am trying to write a script which uploads a file via an HTML form. When I click submit, nothing happens.

    File: upload_form.html

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Untitled Document</title>
        </head>
        <body>
        <form action="do_upload.php" method="post" enctype="multipart/form-data"></form>
        <p><strong>File to upload</strong></p>
        <p><input name="img1" type="file" size="30" /></p>
        <p><input name="submit" type="submit" value="Upolad File" /></p>
        </body>
        </html>

    File: do_upload.php

        <?php
        if ($_FILES[img1] != "" {
            @copy($_FILES[img1] [tm_name], "/tmp" .$_FILES[img1][name])
                or die("couldnt copy the file");
        } else {
            die("no file specified");
        }
        ?>
        <HTML>
        <head>
        <title>Successfull File Upload</title>
        </head>
        <body>
        <h1>Success</h1>
        <p>You sent: <? echo $_FILES[img1][name]; ?>, a <? echo $_FILES[img1][size]; ?>byte filw with a mime type of <? echo $_FILES[img1][type]; ?></p>
        </body>
        </HTML>
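
    For contrast, a hedged sketch of a working version. Two things stand out in the posted code: the form element closes (</form>) before the file input, so the file field is never submitted with the form, and do_upload.php has a missing parenthesis, unquoted array keys, tm_name instead of tmp_name, and "/tmp" without a trailing slash. A corrected outline:

        <!-- upload_form.html: the inputs must sit inside the form element -->
        <form action="do_upload.php" method="post" enctype="multipart/form-data">
            <p><strong>File to upload</strong></p>
            <p><input name="img1" type="file" size="30" /></p>
            <p><input name="submit" type="submit" value="Upload File" /></p>
        </form>

        <?php
        // do_upload.php - minimal sketch; move_uploaded_file is the standard
        // call for files received via an upload form, rather than copy().
        if (!empty($_FILES['img1']['name'])) {
            $dest = '/tmp/' . basename($_FILES['img1']['name']);
            if (!move_uploaded_file($_FILES['img1']['tmp_name'], $dest)) {
                die("couldn't copy the file");
            }
            echo 'You sent: ' . htmlspecialchars($_FILES['img1']['name'])
               . ', a ' . $_FILES['img1']['size'] . ' byte file with a MIME type of '
               . htmlspecialchars($_FILES['img1']['type']);
        } else {
            die("no file specified");
        }
        ?>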

  • Streaming binary data to WCF rest service gives Bad Request (400) when content length is greater than 64k

    - by Mikey Cee
    I have a WCF service that takes a stream:

        [ServiceContract]
        public class UploadService : BaseService
        {
            [OperationContract]
            [WebInvoke(BodyStyle=WebMessageBodyStyle.Bare, Method=WebRequestMethods.Http.Post)]
            public void Upload(Stream data)
            {
                // etc.
            }
        }

    This method is to allow my Silverlight application to upload large binary files, the easiest way being to craft the HTTP request by hand from the client. Here is the code in the Silverlight client that does this:

        const int contentLength = 64 * 1024; // 64 kB

        var request = (HttpWebRequest)WebRequest.Create("http://localhost:8732/UploadService/");
        request.AllowWriteStreamBuffering = false;
        request.Method = WebRequestMethods.Http.Post;
        request.ContentType = "application/octet-stream";
        request.ContentLength = contentLength;

        using (var outputStream = request.GetRequestStream())
        {
            outputStream.Write(new byte[contentLength], 0, contentLength);
            outputStream.Flush();
            using (var response = request.GetResponse());
        }

    Now, in the case above, where I am streaming 64 kB of data (or less), this works OK, and if I set a breakpoint in my WCF method I can examine the stream and see 64 kB worth of zeros. Yay! The problem arises if I send anything more than 64 kB of data, for instance by changing the first line of my client code to the following:

        const int contentLength = 64 * 1024 + 1; // 64 kB + 1 B

    This now throws an exception when I call request.GetResponse():

        The remote server returned an error: (400) Bad Request.

    In my WCF configuration I have set maxReceivedMessageSize, maxBufferSize and maxBufferPoolSize to 2147483647, but to no avail. Here are the relevant sections from my service's app.config:

        <service name="UploadService">
            <endpoint address="" binding="webHttpBinding" bindingName="StreamedRequestWebBinding"
                      contract="UploadService" behaviorConfiguration="webBehavior">
                <identity>
                    <dns value="localhost" />
                </identity>
            </endpoint>
            <host>
                <baseAddresses>
                    <add baseAddress="http://localhost:8732/UploadService/" />
                </baseAddresses>
            </host>
        </service>

        <bindings>
            <webHttpBinding>
                <binding name="StreamedRequestWebBinding"
                         bypassProxyOnLocal="true"
                         useDefaultWebProxy="false"
                         hostNameComparisonMode="WeakWildcard"
                         sendTimeout="00:05:00"
                         openTimeout="00:05:00"
                         receiveTimeout="00:05:00"
                         maxReceivedMessageSize="2147483647"
                         maxBufferSize="2147483647"
                         maxBufferPoolSize="2147483647"
                         transferMode="StreamedRequest">
                    <readerQuotas maxArrayLength="2147483647" maxStringContentLength="2147483647" />
                </binding>
            </webHttpBinding>
        </bindings>

        <behaviors>
            <endpointBehaviors>
                <behavior name="webBehavior">
                    <webHttp />
                </behavior>
            </endpointBehaviors>
        </behaviors>

    How do I make my service accept more than 64 kB of streamed POST data?
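
    An observation about WCF configuration worth double-checking here (offered as a possibility, not a confirmed diagnosis): a named <binding> is attached to an endpoint via the bindingConfiguration attribute, and the endpoint above uses bindingName instead. If the named binding isn't actually applied, the endpoint falls back to a default webHttpBinding, whose maxReceivedMessageSize default is 65536 bytes, which would line up exactly with failures starting at 64 kB + 1. The endpoint element would then read:

        <endpoint address=""
                  binding="webHttpBinding"
                  bindingConfiguration="StreamedRequestWebBinding"
                  contract="UploadService"
                  behaviorConfiguration="webBehavior" />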

  • How can I refactor out needing so many for-loops in rails?

    - by Angela
    I need help refactoring this multi-loop thing. Here is what I have: Campaign has_many Contacts. Campaign also has many templates for Event (Email, Call, and Letter).

    I need a list of all the Emails, Calls and Letters that are "overdue" for every Contact that belongs to a Campaign. Overdue is determined by a from_today method, which looks at the date the Contact was entered in the system and the number of days that needs to pass for any given Event; from_today() outputs the number of days from today that the Event should be done for a given Contact.

    Here is what I've done. It works for all Emails in a Campaign across all contacts. I was going to try to create another each do loop to change the class names, but wasn't sure where to begin: named_scope, push some things into a method, etc. At minimum, I want something that can dynamically change the class names, so the code loops three times across the different events rather than being repeated three times:

        <% @campaigns.each do |campaign| %>
          <h2><%= link_to campaign.name, campaign %></h2>
          <% @events.each do |event| %>
            <%= event %>
            <% for email in campaign.emails %>
              <h4><%= link_to email.title, email %> <%= email.days %> days</h4>
              <% for contact in campaign.contacts.find(:all, :order => "date_entered ASC" ) %>
                <% if (from_today(contact, email.days) < 0) %>
                  <% if show_status(contact, email) == 'no status' %>
                    <p>
                      <%= full_name(contact) %> is <%= from_today(contact, email.days).abs %> days overdue:
                      <%= do_event(contact, email) %>
                    </p>
                  <% end %>
                <% end %>
              <% end %>
            <% end %>
          <% end %>
        <% end %>
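
    One way to collapse the three near-identical loops into one, as a sketch only: it assumes the associations are literally named emails, calls and letters, which is inferred from the description rather than shown in the post.

        <%# Drive the three event types from one list instead of three copies
            of the block; campaign.send(:emails) etc. returns each association. %>
        <% [:emails, :calls, :letters].each do |event_type| %>
          <% campaign.send(event_type).each do |event| %>
            <h4><%= link_to event.title, event %> <%= event.days %> days</h4>
            <% campaign.contacts.find(:all, :order => "date_entered ASC").each do |contact| %>
              <% if from_today(contact, event.days) < 0 && show_status(contact, event) == 'no status' %>
                <p><%= full_name(contact) %> is <%= from_today(contact, event.days).abs %> days overdue: <%= do_event(contact, event) %></p>
              <% end %>
            <% end %>
          <% end %>
        <% end %>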

  • Zaypay alternatives for payments using call or sms

    - by JohannesH
    We are currently trying to implement a payment provider in Zaypay for paying for services by SMS or by calling a number. We already have Google Checkout and PayPal working for regular payments, but Zaypay is rather inflexible, poorly documented and a pain to set up when you have hundreds of products with varying prices. So my question is: do you know of any other European payment providers that take SMS and call payments?

    As a response to Robert's answer/question: Hi Robert, I must say that the Zaypay solution is the best and only one I've seen thus far regarding phoned payments. However, since it's now 2 months since I finished the implementation of our custom Zaypay UI, I can't remember many of the details of the problems we were having. I'll try to give a brief account of them anyway, as best I can.

    First of all, I would like to see a redirection-type scenario for payalogues. From what I remember, you guys are using the JS framework Prototype, which doesn't play nice with jQuery, which we are using, so we weren't able to use the popup-type scenario supported by payalogues.

    Furthermore, when implementing our custom interface I remember a lot of missing translations, like words that were codes instead of a word or a phrase. This meant we ended up writing/translating all the messages we needed ourselves.

    Also, another point of annoyance was the setup of prices and items. I wish we could just send in the order items/prices as a part of the interface, like you can in Google Checkout or PayPal (not that they're flawless either), instead of having to define ALL the items you will ever sell through your admin interface beforehand. As far as I can remember, it is virtually impossible to use Zaypay for a multi-item order in its current form.

    Finally there are, as far as I can tell, some security issues that you have to think about when you implement a custom solution... especially an AJAX-driven one. As I said in my original post, you do mention this in the documentation, but I believe the documentation wasn't that comprehensive regarding security issues. Again, I wish I could give more details, but the code & client are long since gone, so I can't look up the comments I wrote. Sorry!

    Oh yeah, the general API documentation wasn't exactly comprehensive or 100% correct either. Again, I don't want to advise people against using Zaypay; I just want to advise that they should try it out first on a realistic prototype and think about their implementation before releasing to production. Maybe it's just me who misunderstood a lot of things, but I generally had a difficult time using your framework, and I was left with a feeling that the API was very new and not thought through from the beginning.

  • Java getInputStream SocketTimeoutException instead of NoRouteToHostException

    - by Jon
    I have an odd issue when trying to open multiple input streams (in separate threads) on Linux (RHEL). The behaviour works as expected on Windows. I am kicking off 3 threads to open https connections to 3 different servers. All three are invalid IP addresses (in this test case), so I expect a NoRouteToHostException for each of them. The first two return these as expected, and quite quickly (see stack trace below). However the third (and the 4th when I tested it that way) does NOT give a no-route exception. It waits for ages, and then gives a SocketTimeoutException (see other stack trace below). This takes ages to come back, and does not accurately express the connection issue. The offending line of code is: reader = new BufferedReader(new InputStreamReader(conn.getInputStream())); Has anyone seen something like this before? Are there multi-threading issues with sockets on RHEL, or some limit somewhere to how many can connect at once?

    Expected stack trace, as received for the first two:

    java.net.NoRouteToHostException: No route to host
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
        at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
        at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
        at java.net.Socket.connect(Socket.java:529)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:559)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:158)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
        at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:272)
        at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:329)
        at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:172)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:916)
        at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:158)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1177)
        at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:234)

    Unexpected stack trace, as received on the 3rd:

    java.net.SocketTimeoutException: connect timed out
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
        at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
        at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
        at java.net.Socket.connect(Socket.java:529)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:559)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:158)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
        at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:272)
        at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:329)
        at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:172)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:916)
        at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:158)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1177)
        at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:234)
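
    (Not part of the original question — a minimal sketch of the threaded connection attempts described above, with an explicit connect timeout set on each HttpsURLConnection so a hung connect surfaces quickly instead of waiting for the OS default. The addresses, thread structure, and 5-second timeout are placeholder values, not the poster's actual code.)

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;

    public class ConnectTest {
        // Placeholder addresses standing in for the three unreachable servers in the post.
        private static final String[] HOSTS = {
            "https://10.255.255.1/", "https://10.255.255.2/", "https://10.255.255.3/"
        };

        public static void main(String[] args) {
            for (final String host : HOSTS) {
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            HttpsURLConnection conn =
                                (HttpsURLConnection) new URL(host).openConnection();
                            // Fail fast instead of waiting for the platform's default connect timeout.
                            conn.setConnectTimeout(5000); // milliseconds
                            conn.setReadTimeout(5000);
                            BufferedReader reader = new BufferedReader(
                                new InputStreamReader(conn.getInputStream()));
                            System.out.println(host + " -> " + reader.readLine());
                            reader.close();
                        } catch (IOException e) {
                            // Per the post, on RHEL the third thread reportedly lands here with
                            // SocketTimeoutException instead of NoRouteToHostException.
                            System.out.println(host + " -> " + e);
                        }
                    }
                }).start();
            }
        }
    }

    Setting the timeouts does not explain the RHEL/Windows difference, but it bounds how long the misleading failure takes to surface.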

    Read the article

  • Performance tuning of a Hibernate+Spring+MySQL operation that stores images uploaded by users

    - by Umar
    Hi, I am working on a web project that is Spring+Hibernate+MySQL based. I am stuck at a point where I have to store images uploaded by a user into the database. Although I have written some code that works well for now, I believe that things will mess up when the project goes live. Here's my domain class that carries the image bytes:

    @Entity
    public class Picture implements java.io.Serializable {
        long id;
        byte[] data;
        ... // getters and setters
    }

    And here's my controller that saves the file on submit:

    public class PictureUploadFormController extends AbstractBaseFormController {
        ...
        protected ModelAndView onSubmit(HttpServletRequest request, HttpServletResponse response,
                Object command, BindException errors) throws Exception {
            MultipartFile file;
            // getting the MultipartFile from the command object
            ...
            // beginning hibernate transaction
            ...
            Picture p = new Picture();
            p.setData(file.getBytes());
            pictureDAO.makePersistent(p); // this method simply calls getSession().saveOrUpdate(p)
            // committing hibernate transaction
            ...
        }
        ...
    }

    Obviously a bad piece of code. Is there any way I could use an InputStream or Blob to save the data, instead of first loading all the bytes from the user into memory and then pushing them into the database? I did some research on Hibernate's support for Blob, and found this in the Hibernate In Action book: "java.sql.Blob and java.sql.Clob are the most efficient way to handle large objects in Java. Unfortunately, an instance of Blob or Clob is only useable until the JDBC transaction completes. So if your persistent class defines a property of java.sql.Clob or java.sql.Blob (not a good idea anyway), you'll be restricted in how instances of the class may be used. In particular, you won't be able to use instances of that class as detached objects. Furthermore, many JDBC drivers don't feature working support for java.sql.Blob and java.sql.Clob. Therefore, it makes more sense to map large objects using the binary or text mapping type, assuming retrieval of the entire large object into memory isn't a performance killer. Note you can find up-to-date design patterns and tips for large object usage on the Hibernate website, with tricks for particular platforms." Now, apparently Blob cannot be used, as it is not a good idea anyway, so what else could be used to improve the performance? I couldn't find any up-to-date design pattern or any useful information on the Hibernate website. So any help/recommendations from stackoverflowers will be much appreciated. Thanks
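
    (Not part of the original question — a minimal sketch of one commonly suggested alternative, assuming a Hibernate version (3.6+) whose Session exposes a LobHelper: map the column with @Lob as a java.sql.Blob and stream the MultipartFile's InputStream straight to the driver, so the upload is never buffered as a byte[] in the web tier. The PictureSaver class and the way the Session is obtained are placeholders for illustration.)

    import java.io.IOException;
    import java.sql.Blob;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Lob;
    import org.hibernate.Session;
    import org.springframework.web.multipart.MultipartFile;

    @Entity
    public class Picture implements java.io.Serializable {
        @Id @GeneratedValue
        private long id;

        @Lob
        private Blob data;   // streamed to the database, not materialised as a byte[]

        public Blob getData() { return data; }
        public void setData(Blob data) { this.data = data; }
    }

    // Hypothetical helper standing in for the controller/DAO code, given an open Session:
    class PictureSaver {
        void save(Session session, MultipartFile file) throws IOException {
            Picture p = new Picture();
            // createBlob(InputStream, length) hands the stream to the JDBC driver
            // instead of buffering file.getBytes() in memory first.
            p.setData(session.getLobHelper().createBlob(file.getInputStream(), file.getSize()));
            session.saveOrUpdate(p);
        }
    }

    Whether this actually avoids buffering depends on the JDBC driver; MySQL Connector/J may still materialise the blob unless locator-style options such as emulateLocators are enabled, so it is worth benchmarking against the plain byte[] mapping, or simply storing the file on disk and keeping only its path in the database.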

    Read the article

  • Is there a way to split a widescreen monitor in to two or more virtual monitors?

    - by Mike Thompson
    Like most developers I have grown to love dual monitors. I won't go into all the reasons for their goodness; just take it as a given. However, they are not perfect. You can never seem to line them up "just right". You always end up with the monitors at slightly funny angles. And of course the bezel always gets in the way. And this is with identical monitors. The problem is much worse with different monitors -- VMware's multi-monitor feature won't even work with monitors of different resolutions. When you use multiple monitors, one of them becomes your primary monitor of focus. Your focus may flip from one monitor to the other, but at any point in time you are usually focusing on only one monitor. There are exceptions to this (WinDiff, Excel), but this is generally the case. I suggest that a single large monitor with all the benefits of multiple smaller monitors would be a better solution. Wide-screen monitors are fantastic, but it is hard to use all the space efficiently. If you are writing code you are generally working on the left-hand side of the window. If you maximize an editor on a wide-screen monitor, the right-hand side of the window will be a sea of white. Programs like WinSplit Revolution will help to organise your windows, but this is really just addressing the symptom, not the problem. Even with WinSplit Revolution, when you maximise a window it will take up the whole screen. You can't lock a window into a specific section of the screen. This is where virtual monitors come in. What would be really nice is a video driver that sits on top of the existing driver, but allows a single monitor to be virtualised into multiple monitors. Control Panel would see your single physical monitor as two or more virtual monitors. The software could even support a virtual bezel to emphasise what is happening, or you could opt for a seamless mode. Programs like WinSplit Revolution and UltraMon would still work. This virtual video driver would allow you to slice & dice your physical monitor into as many virtual monitors as you want. Does anybody know if such software exists? If not, are there any budding Windows display driver gurus out there willing to take up the challenge? I am not after the myriad of virtual desktop/window manager programs that are available. I get frustrated with these programs. They seem good at first, but they usually have some strange behaviour and don't work well with other programs (such as WinSplit Revolution). I want the real thing!

    Read the article
