Search Results

Search found 9980 results on 400 pages for 'kernel extension'.

Page 374/400 | < Previous Page | 370 371 372 373 374 375 376 377 378 379 380 381  | Next Page >

  • How can we get the output format to CSV instead of HTML in Alfresco using webscripts?

    - by pavan123
    how Can we change the output format to CSV instead of HTML in Alfresco using webscripts? below are the my corresponding FTL and Webscript files recursive.get.html.ftl <#macro recurse_macro node depth> <#if node.isContainer> <tr> <td> ${node.properties.name} </td> <td></td> </tr> <#list node.children as child> <#if child.isContainer> <@recurse_macro node=child depth=depth+1/> <#list child.children as child2> <#if child2.isDocument> <tr><td></td><td>${child2.properties.name}</td></tr> </#if> </#list> </#if> </#list> </#if> </#macro> Recursive Listing of Spaces & Documents: Space Document recursive.get.desc.xml <webscript> <shortname>recurcive</shortname> <description>Recursive</description> <url>/sample/recursive/{recursive}</url> <format default="html">extension</format> <authentication>guest</authentication> </webscript> and html output is Recursive Listing of Spaces & Documents: Space Document Company Home Data Dictionary Space Templates Software Engineering Project Documentation Drafts Pending Approval Published Samples system-overview.html Discussions UI Design Presentations Quality Assurance Presentation Templates doc_info.ftl localizable.ftl my_docs.ftl my_spaces.ftl my_summary.ftl translatable.ftl recent_docs.ftl general_example.ftl my_docs_inline.ftl show_audit.ftl readme.ftl Email Templates notify_user_email.ftl invite_user_email.ftl RSS Templates RSS_2.0_recent_docs.ftl Saved Searches admin Scripts backup.js example test script.js backup and log.js append copyright.js alfresco docs.js test return value.js Web Scripts org alfresco sample blogsearch.get.js blogsearch.get.atom.ftl blogsearch.get.desc.xml blogsearch.get.html.ftl blogsearch.get.html.400.ftl blogsearch.get.atom.400.ftl categorysearch.get.js categorysearch.get.atom.ftl categorysearch.get.desc.xml categorysearch.get.html.ftl categorysearch.get.html.404.ftl categorysearch.get.atom.404.ftl folder.get.js folder.get.atom.ftl folder.get.desc.xml folder.get.html.ftl avmstores.get.desc.xml avmstores.get.html.ftl avmbrowse.get.js avmbrowse.get.desc.xml avmbrowse.get.html.ftl recursive.get.desc.xml recursive.get.html.ftl sgs.get.desc.xml sgs.get.csv.ftl sample1.get.desc.xml sample1.get.csv.ftl first.get.desc.xml first.get.text.ftl rag.get.html.ftl rag.get.desc.xml new1.get.desc.xml new1.get.html.ftl excel.get.html.ftl excel.get.desc.xml sgs1.get.desc.xml one.get.html.ftl one.get.desc.xml one.get.js readme.html Web Scripts Extensions readme.html Guest Home Alfresco-Tutorial.pdf User Homes isabel Users Home
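
    Alfresco web scripts choose their response template by format, so one route (a sketch, not verified against a specific Alfresco version) is to add a CSV twin of the HTML template - recursive.get.csv.ftl - next to the existing one, and either change the descriptor to <format default="csv">extension</format> or call the script with a .csv extension. The companyhome root object is standard in web script templates; the exact CSV layout below is only illustrative.

```ftl
<#-- recursive.get.csv.ftl : same recursion as the HTML template, emitting CSV rows -->
<#macro recurse_macro node depth>
<#if node.isContainer>
${node.properties.name},
<#list node.children as child>
<#if child.isContainer><@recurse_macro node=child depth=depth+1/>
<#elseif child.isDocument>,${child.properties.name}
</#if>
</#list>
</#if>
</#macro>
space,document
<@recurse_macro node=companyhome depth=0/>
```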

    Read the article

  • FFMPEG-PHP Windows "Can't open movie file"

    - by bah
    Hi, ffmpeg extension is loaded as it is shown at phpinfo(), my file and script are at the same location, but I'm still getting this error. Warning: Can't open movie file Untitled.avi in C:\xampp\htdocs\skelbiu\fetch.php on line 4 Fatal error: Call to a member function getDuration() on a non-object in C:\xampp\htdocs\skelbiu\fetch.php on line 5 My script: extension_loaded('ffmpeg') or die('Error in loading ffmpeg'); $ffmpegInstance = new ffmpeg_movie('Untitled.avi'); echo "getDuration: " . $ffmpegInstance->getDuration() . "getFrameCount: " . $ffmpegInstance->getFrameCount() . "getFrameRate: " . $ffmpegInstance->getFrameRate() . "getFilename: " . $ffmpegInstance->getFilename() . "getComment: " . $ffmpegInstance->getComment() . "getTitle: " . $ffmpegInstance->getTitle() . "getAuthor: " . $ffmpegInstance->getAuthor() . "getCopyright: " . $ffmpegInstance->getCopyright() . "getArtist: " . $ffmpegInstance->getArtist() . "getGenre: " . $ffmpegInstance->getGenre() . "getTrackNumber: " . $ffmpegInstance->getTrackNumber() . "getYear: " . $ffmpegInstance->getYear() . "getFrameHeight: " . $ffmpegInstance->getFrameHeight() . "getFrameWidth: " . $ffmpegInstance->getFrameWidth() . "getPixelFormat: " . $ffmpegInstance->getPixelFormat() . "getBitRate: " . $ffmpegInstance->getBitRate() . "getVideoBitRate: " . $ffmpegInstance->getVideoBitRate() . "getAudioBitRate: " . $ffmpegInstance->getAudioBitRate() . "getAudioSampleRate: " . $ffmpegInstance->getAudioSampleRate() . "getVideoCodec: " . $ffmpegInstance->getVideoCodec() . "getAudioCodec: " . $ffmpegInstance->getAudioCodec() . "getAudioChannels: " . $ffmpegInstance->getAudioChannels() . "hasAudio: " . $ffmpegInstance->hasAudio(); I'm using php 5.2.9 (XAMPP 1.7.1), Windows 7. Thanks in advance!
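
    The warning usually means the extension cannot resolve the relative filename against its current working directory rather than a codec problem. A small thing worth trying first (a sketch; it assumes Untitled.avi sits next to fetch.php):

```php
<?php
// Sketch: resolve the movie path absolutely and verify it before handing it to ffmpeg_movie
extension_loaded('ffmpeg') or die('Error in loading ffmpeg');

$path = dirname(__FILE__) . DIRECTORY_SEPARATOR . 'Untitled.avi';
if (!is_readable($path)) {
    die("Cannot read $path - check the location and file permissions");
}

$movie = new ffmpeg_movie($path);
echo 'getDuration: ' . $movie->getDuration();
```

    If the absolute path still fails, the AVI itself (or the bundled ffmpeg build) is the next suspect.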

    Read the article

  • How can I have a Makefile automatically rebuild source files that include a modified header file?

    - by Nicholas Flynt
    I have the following makefile that I use to build a program (a kernel, actually) that I'm working on. Its from scratch and I'm learning about the process, so its not perfect, but I think its powerful enough at this point for my level of experience writing makefiles. AS = nasm CC = gcc LD = ld TARGET = core BUILD = build SOURCES = source INCLUDE = include ASM = assembly VPATH = $(SOURCES) CFLAGS = -Wall -O -fstrength-reduce -fomit-frame-pointer -finline-functions \ -nostdinc -fno-builtin -I $(INCLUDE) ASFLAGS = -f elf #CFILES = core.c consoleio.c system.c CFILES = $(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.c))) SFILES = assembly/start.asm SOBJS = $(SFILES:.asm=.o) COBJS = $(CFILES:.c=.o) OBJS = $(SOBJS) $(COBJS) build : $(TARGET).img $(TARGET).img : $(TARGET).elf c:/python26/python.exe concat.py stage1 stage2 pad.bin core.elf floppy.img $(TARGET).elf : $(OBJS) $(LD) -T link.ld -o $@ $^ $(SOBJS) : $(SFILES) $(AS) $(ASFLAGS) $< -o $@ %.o: %.c @echo Compiling $<... $(CC) $(CFLAGS) -c -o $@ $< #Clean Script - Should clear out all .o files everywhere and all that. clean: -del *.img -del *.o -del assembly\*.o -del core.elf My main issue with this makefile is that when I modify a header file that one or more C files include, the C files aren't rebuilt. I can fix this quite easily by having all of my header files be dependencies for all of my C files, but that would effectively cause a complete rebuild of the project any time I changed/added a header file, which would not be very graceful. What I want is for only the C files that include the header file I change to be rebuilt, and for the entire project to be linked again. I can do the linking by causing all header files to be dependencies of the target, but I cannot figure out how to make the C files be invalidated when their included header files are newer. I've heard that GCC has some commands to make this possible (so the makefile can somehow figure out which files need to be rebuilt) but I can't for the life of me find an actual implementation example to look at. Can someone post a solution that will enable this behavior in a makefile? EDIT: I should clarify, I'm familiar with the concept of putting the individual targets in and having each target.o require the header files. That requires me to be editing the makefile every time I include a header file somewhere, which is a bit of a pain. I'm looking for a solution that can derive the header file dependencies on its own, which I'm fairly certain I've seen in other projects.
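
    A common way to get exactly this behaviour with GNU Make and GCC is to let the compiler emit a dependency fragment for every object and then include those fragments; the sketch below is adapted to the variables already defined above (recipe lines must start with a tab):

```make
# Sketch: GCC writes a .d file per object listing every header it actually included
DEPFLAGS = -MMD -MP
DEPS     = $(COBJS:.o=.d)

%.o: %.c
	@echo Compiling $<...
	$(CC) $(CFLAGS) $(DEPFLAGS) -c -o $@ $<

# "-include" silently skips .d files that don't exist yet (i.e. the first build)
-include $(DEPS)
```

    After a header changes, only the objects whose .d files mention it are recompiled, and the existing $(TARGET).elf rule relinks as before.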

    Read the article

  • Unable to install gem "pg" on Ubuntu 12.10 (AMD64)

    - by Lynx_Eyes
    I've been (unsuccessfully) trying to install the "pg" gem on my ruby 1.9.3-p286 but nothing seems to work. I've already installed postgresql (9.1), libpq-dev and a few others like postgresql-server-dev-9.1. I've tried to pass the "with-pg-config" flag to the gem install but simply nothing seems to work. Every time I try to install the gem it outputs something like this: Building native extensions. This could take a while... ERROR: Error installing pg: ERROR: Failed to build gem native extension. /home/lynux/.rvm/rubies/ruby-1.9.3-p286/bin/ruby extconf.rb checking for pg_config... yes Using config values from /usr/bin/pg_config checking for libpq-fe.h... yes checking for libpq/libpq-fs.h... yes checking for pg_config_manual.h... yes checking for PQconnectdb() in -lpq... no checking for PQconnectdb() in -llibpq... no checking for PQconnectdb() in -lms/libpq... no Can't find the PostgreSQL client library (libpq) *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/home/lynux/.rvm/rubies/ruby-1.9.3-p286/bin/ruby --with-pg --without-pg --with-pg-dir --without-pg-dir --with-pg-include --without-pg-include=${pg-dir}/include --with-pg-lib --without-pg-lib=${pg-dir}/lib --with-pg-config --without-pg-config --with-pg_config --without-pg_config --with-pqlib --without-pqlib --with-libpqlib --without-libpqlib --with-ms/libpqlib --without-ms/libpqlib Gem files will remain installed in /home/lynux/.rvm/gems/ruby-1.9.3-p286@phisiodata/gems/pg-0.14.1 for inspection. Results logged to /home/lynux/.rvm/gems/ruby-1.9.3-p286@phisiodata/gems/pg-0.14.1/ext/gem_make.out What am I doing wrong? Is there something else that I should do before trying to install the gem? Thank you in advance. [EDIT] Ok, so joelparkerhenderson's answer set me to think that there might me something wrong with paths and libraries and a went on digging a little bit further.. I've found this awesome post and it solved! Basically the problem lies with RVM. So, my problem is solved and for anyone out there that might suffer from the same thing, follow the link!
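
    For anyone who lands on the same error before finding that RVM-related fix, a couple of hedged diagnostic steps (plain shell; the paths are taken from the output above):

```sh
# Can the linker actually see libpq on this machine?
ldconfig -p | grep libpq

# Point the gem build at PostgreSQL's pg_config explicitly
gem install pg -- --with-pg-config=/usr/bin/pg_config

# mkmf.log under the gem's ext/ directory records the exact link command that failed
less ~/.rvm/gems/ruby-1.9.3-p286@phisiodata/gems/pg-0.14.1/ext/mkmf.log
```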

    Read the article

  • Need help converting Ruby code to PHP code

    - by newprog
    Yesterday I posted this queston. Today I found the code which I need but written in Ruby. Some parts of code I have understood (I don't know Ruby) but there is one part that I can't. I think people who know ruby and php can help me understand this code. def do_create(image) # Clear any old info in case of a re-submit FIELDS_TO_CLEAR.each { |field| image.send(field+'=', nil) } image.save # Compose request vm_params = Hash.new # Submitting a file in ruby requires opening it and then reading the contents into the post body file = File.open(image.filename_in, "rb") # Populate the parameters and compute the signature # Normally you would do this in a subroutine - for maximum clarity all # parameters are explicitly spelled out here. vm_params["image"] = file # Contents will be read by the multipart object created below vm_params["image_checksum"] = image.image_checksum vm_params["start_job"] = 'vectorize' vm_params["image_type"] = image.image_type if image.image_type != 'none' vm_params["image_complexity"] = image.image_complexity if image.image_complexity != 'none' vm_params["image_num_colors"] = image.image_num_colors if image.image_num_colors != '' vm_params["image_colors"] = image.image_colors if image.image_colors != '' vm_params["expire_at"] = image.expire_at if image.expire_at != '' vm_params["licensee_id"] = DEVELOPER_ID #in php it's like this $vm_params["sequence_number"] = -rand(100000000);????? vm_params["sequence_number"] = Kernel.rand(1000000000) # Use a negative value to force an error when calling the test server vm_params["timestamp"] = Time.new.utc.httpdate string_to_sign = CREATE_URL + # Start out with the URL being called... #vm_params["image"].to_s + # ... don't include the file per se - use the checksum instead vm_params["image_checksum"].to_s + # ... then include all regular parameters vm_params["start_job"].to_s + vm_params["image_type"].to_s + vm_params["image_complexity"].to_s + # (nil.to_s => '', so this is fine for vm_params we don't use) vm_params["image_num_colors"].to_s + vm_params["image_colors"].to_s + vm_params["expire_at"].to_s + vm_params["licensee_id"].to_s + # ... then do all the security parameters vm_params["sequence_number"].to_s + vm_params["timestamp"].to_s vm_params["signature"] = sign(string_to_sign) #no problem # Workaround class for handling multipart posts mp = Multipart::MultipartPost.new query, headers = mp.prepare_query(vm_params) # Handles the file parameter in a special way (see /lib/multipart.rb) file.close # mp has read the contents, we can close the file now response = post_form(URI.parse(CREATE_URL), query, headers) logger.info(response.body) response_hash = ActiveSupport::JSON.decode(response.body) # Decode the JSON response string ##I have understood below def sign(string_to_sign) #logger.info("String to sign: '#{string_to_sign}'") Base64.encode64(HMAC::SHA1.digest(DEVELOPER_KEY, string_to_sign)) end # Within Multipart modul I have this: class MultipartPost BOUNDARY = 'tarsiers-rule0000' HEADER = {"Content-type" => "multipart/form-data, boundary=" + BOUNDARY + " "} def prepare_query (params) fp = [] params.each {|k,v| if v.respond_to?(:read) fp.push(FileParam.new(k, v.path, v.read)) else fp.push(Param.new(k,v)) end } query = fp.collect {|p| "--" + BOUNDARY + "\r\n" + p.to_multipart }.join("") + "--" + BOUNDARY + "--" return query, HEADER end end end Thanks for your help.
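
    For the lines marked with question marks, a rough PHP equivalent of the security parameters and the sign helper could look like the sketch below (DEVELOPER_KEY and CREATE_URL are assumed to be constants you define yourself; the parameter names are copied from the Ruby):

```php
<?php
// Sketch: the Ruby security parameters and sign() translated to PHP
$vm_params['sequence_number'] = mt_rand(0, 999999999);              // Kernel.rand(1000000000)
$vm_params['timestamp']       = gmdate('D, d M Y H:i:s') . ' GMT';  // Time.new.utc.httpdate

function sign($string_to_sign) {
    // Base64.encode64(HMAC::SHA1.digest(DEVELOPER_KEY, string_to_sign))
    return base64_encode(hash_hmac('sha1', $string_to_sign, DEVELOPER_KEY, true));
}

$string_to_sign = CREATE_URL
    . $vm_params['image_checksum']
    . $vm_params['start_job']
    . $vm_params['image_type']
    . $vm_params['image_complexity']
    . $vm_params['image_num_colors']
    . $vm_params['image_colors']
    . $vm_params['expire_at']
    . $vm_params['licensee_id']
    . $vm_params['sequence_number']
    . $vm_params['timestamp'];

$vm_params['signature'] = sign($string_to_sign);
```

    The multipart POST itself (the Multipart::MultipartPost part) maps naturally onto PHP's cURL extension, which builds the multipart body for you when a file field is included in CURLOPT_POSTFIELDS.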

    Read the article

  • Sitecore E-Commerce Module - Discount/Promotional Codes

    - by Zachary Kniebel
    I am working on a project for which I must use Sitecore's E-Commerce Module (and Sitecore 6.5 rev. 120706 - aka 'Update 5') to create a web-store. One of the features that I am trying to implement is a generic promotional/discount code system - customer enters a code at checkout which grants a discount like 'free shipping', '20% off', etc. At the moment, I am looking for some guidance (a high-level solution, a few pseudo-ideas, some references to review, etc) as to how this can be accomplished. Summary: What I am looking for is a way to detect whether or not the user entered a promo code at a previous stage in the checkout line, and to determine what that promo code is, if they did. Progress Thus Far: I have thoroughly reviewed all of the Sitecore E-Commerce Services (SES) documentation, especially "SES Order Line Extension" documentation (which I believe will have to be modified/extended in order to accomplish this task). Additionally, I have thoroughly reviewed the Sitecore Community article Extending Sitecore E-Commerce - Pricing and believe that it may be a useful guide for applying a discount statically, but does not say much in the way of applying a discount dynamically. After reviewing these documents, I have come up with the following possible high-level solution to start from: I create a template to represent a promotional code, which holds all data relevant to the promotion (percent off, free shipping, code, etc). I then create another template (based on the Product Search Group template) that holds a link to an item within a global "Promotional Code" items folder. Next, I use the Product Search Group features of my new template to choose which products to apply the discount to. In the source code for the checkout I create a class that checks if a code has been entered and, if so, somehow carry it through the rest of the checkout process. This is where I get stuck. More Details: No using cookies No GET requests No changing/creating/deleting items in the Sitecore Database during the checkout process (e.g., no manipulation of fields of a discount item during checkout to signal that the discount has been applied) must stay within the scope of C# Last Notes: I will update this post with any more information that I find/progress that I make. I upgrade all answers that are relevant and detailed, thought-provoking, or otherwise useful to me and potentially useful to others, in addition to any high-level answers that serve as a feasible solution to this problem; even if your idea doesn't help me, if I think it will help someone else I will still upgrade it. Thanks, in advance, for all your help! :)
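
    Not SES-specific, but one way to satisfy the "no cookies, no GET, no item writes" constraints is to keep the entered code in server-side session state for the lifetime of the checkout and only resolve it against the promo items when totals are calculated. Everything below uses hypothetical names - it is a sketch of the carrying mechanism, not Sitecore API:

```csharp
using System.Web;

// Hypothetical sketch: carry the promo code through the checkout in session state
public static class PromotionContext
{
    private const string Key = "CheckoutPromoCode";

    public static void SetCode(string code)
    {
        HttpContext.Current.Session[Key] = code;
    }

    public static string GetCode()
    {
        return HttpContext.Current.Session[Key] as string;
    }
}
```

    A later checkout step can then look up the promo item whose code field equals GetCode() and fold the discount into the order total at calculation time, which keeps the Sitecore items themselves read-only during checkout.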

    Read the article

  • Large File Download - Connection With Server Reset

    - by daveywc
    I have an asp.net website that allows the user to download largish files - 30mb to about 60mb. Sometimes the download works fine but often it fails at some varying point before the download finishes with the message saying that the connection with the server was reset. Originally I was simply using Server.TransmitFile but after reading up a bit I am now using the code posted below. I am also setting the Server.ScriptTimeout value to 3600 in the Page_Init event. private void DownloadFile(string fname, bool forceDownload) { string path = MapPath(fname); string name = Path.GetFileName(path); string ext = Path.GetExtension(path); string type = ""; // set known types based on file extension if (ext != null) { switch (ext.ToLower()) { case ".mp3": type = "audio/mpeg"; break; case ".htm": case ".html": type = "text/HTML"; break; case ".txt": type = "text/plain"; break; case ".doc": case ".rtf": type = "Application/msword"; break; } } if (forceDownload) { Response.AppendHeader("content-disposition", "attachment; filename=" + name.Replace(" ", "_")); } if (type != "") { Response.ContentType = type; } else { Response.ContentType = "application/x-msdownload"; } System.IO.Stream iStream = null; // Buffer to read 10K bytes in chunk: byte[] buffer = new Byte[10000]; // Length of the file: int length; // Total bytes to read: long dataToRead; try { // Open the file. iStream = new System.IO.FileStream(path, System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.Read); // Total bytes to read: dataToRead = iStream.Length; //Response.ContentType = "application/octet-stream"; //Response.AddHeader("Content-Disposition", "attachment; filename=" + filename); // Read the bytes. while (dataToRead > 0) { // Verify that the client is connected. if (Response.IsClientConnected) { // Read the data in buffer. length = iStream.Read(buffer, 0, 10000); // Write the data to the current output stream. Response.OutputStream.Write(buffer, 0, length); // Flush the data to the HTML output. Response.Flush(); buffer = new Byte[10000]; dataToRead = dataToRead - length; } else { //prevent infinite loop if user disconnects dataToRead = -1; } } } catch (Exception ex) { // Trap the error, if any. Response.Write("Error : " + ex.Message); } finally { if (iStream != null) { //Close the file. iStream.Close(); } Response.Close(); } }
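
    Two things that often matter with this pattern and are cheap to try (a sketch, not a guaranteed fix): disable page buffering and send a Content-Length up front, so the browser and any intermediate proxies know how much to expect. Also note that Response.Close() in the finally block is documented to terminate the connection abruptly, which by itself can produce "connection was reset" errors - HttpApplication.CompleteRequest() is the gentler way to finish.

```csharp
// Sketch: additions to DownloadFile once the stream is open and dataToRead is known
Response.BufferOutput = false;                                   // stream instead of buffering everything
Response.AddHeader("Content-Length", dataToRead.ToString());

while (dataToRead > 0)
{
    if (!Response.IsClientConnected)
        break;                                                   // client went away - stop cleanly

    length = iStream.Read(buffer, 0, buffer.Length);
    Response.OutputStream.Write(buffer, 0, length);
    Response.Flush();
    dataToRead -= length;
}
```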

    Read the article

  • PHP mini web server file download

    - by snikolov
    $httpsock = @socket_create_listen("9090"); if (!$httpsock) { print "Socket creation failed!\n"; exit; } while (1) { $client = socket_accept($httpsock); $input = trim(socket_read ($client, 4096)); $input = explode(" ", $input); $input = $input[1]; $fileinfo = pathinfo($input); switch ($fileinfo['extension']) { default: $mime = "text/html"; } if ($input == "/") { $input = "index.html"; } $input = ".$input"; if (file_exists($input) && is_readable($input)) { echo "Serving $input\n"; $contents = file_get_contents($input); $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents"; } else { //$contents = "The file you requested doesn't exist. Sorry!"; //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents"; function openfile() { $filename = "a.pl"; $file = fopen($filename, 'r'); $filesize = filesize($filename); $buffer = fread($file, $filesize); $array = array("Output"=$buffer,"filesize"=$filesize,"filename"=$filename); return $array; } $send = openfile(); $file = $send['filename']; $filesize = $send['filesize']; $output = 'HTTP/1.0 200 OK\r\n'; $output .= "Content-type: application/octet-stream\r\n"; $output .= 'Content-Disposition: attachment; filename="'.$file.'"\r\n'; $output .= "Content-Length:$filesize\r\n"; $output .= "Accept-Ranges: bytes\r\n"; $output .= "Cache-Control: private\n\n"; $output .= $send['Output']; $output .= "Content-Transfer-Encoding: binary"; $output .= "Connection: Keep-Alive\r\n"; } socket_write($client, $output); socket_close ($client); } socket_close ($httpsock); Hello, I am snikolov i am creating a miniwebserver with php and i would like to know how i can send the client a file to download with his browser such as firefox or internet explore i am sending a file to the user to download via sockets, but the cleint is not getting the filename and the information to download can you please help me here,if i declare the file again i get this error in my server Fatal error: Cannot redeclare openfile() (previously declared in C:\User s\fsfdsf\sfdsfsdf\httpd.php:31) in C:\Users\hfghfgh\hfghg\httpd.php on li ne 29, if its possible, i would like to know if the webserver can show much banwdidth the user request via sockets, perl has the same option as php but its more hardcore than php i dont understand much about perl, i even saw that a miniwebserver can show much the client user pulls from the server would it be possible that you can assist me with this coding, i much aprreciate it thank you guys.
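
    The response as built will not download cleanly: the single-quoted '\r\n' strings send literal backslashes instead of CRLFs, several headers are appended after the body, and declaring openfile() inside the while loop is what triggers the "Cannot redeclare" fatal on the second request. A sketch of a corrected download response, defined once above the loop:

```php
<?php
// Sketch: build a download response with real CRLFs and all headers before the body
function build_download_response($filename)
{
    $contents = file_get_contents($filename);
    $headers  = "HTTP/1.0 200 OK\r\n";
    $headers .= "Server: BabyHTTP\r\n";
    $headers .= "Content-Type: application/octet-stream\r\n";
    $headers .= "Content-Disposition: attachment; filename=\"" . basename($filename) . "\"\r\n";
    $headers .= "Content-Length: " . strlen($contents) . "\r\n";
    $headers .= "Connection: close\r\n";
    $headers .= "\r\n";                      // blank line terminates the header block
    return $headers . $contents;
}

// inside the accept loop, for the branch that serves a.pl:
// $output = build_download_response('a.pl');
// socket_write($client, $output, strlen($output));
```

    Passing the length to socket_write matters for binary payloads; as for measuring per-client bandwidth, the simplest place is to total the bytes handed to socket_write for each connection.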

    Read the article

  • How to move a kinect skeleton to another position

    - by Ewerton
    I am working on a extension method to move one skeleton to a desired position in the kinect field os view. My code receives a skeleton to be moved and the destiny position, i calculate the distance between the received skeleton hip center and the destiny position to find how much to move, then a iterate in the joint applying this factor. My code, actualy looks like this. public static Skeleton MoveTo(this Skeleton skToBeMoved, Vector4 destiny) { Joint newJoint = new Joint(); ///Based on the HipCenter (i dont know if it is reliable, seems it is.) float howMuchMoveToX = Math.Abs(skToBeMoved.Joints[JointType.HipCenter].Position.X - destiny.X); float howMuchMoveToY = Math.Abs(skToBeMoved.Joints[JointType.HipCenter].Position.Y - destiny.Y); float howMuchMoveToZ = Math.Abs(skToBeMoved.Joints[JointType.HipCenter].Position.Z - destiny.Z); float howMuchToMultiply = 1; // Iterate in the 20 Joints foreach (JointType item in Enum.GetValues(typeof(JointType))) { newJoint = skToBeMoved.Joints[item]; // This adjust, try to keeps the skToBeMoved in the desired position if (newJoint.Position.X < 0) howMuchToMultiply = 1; // if the point is in a negative position, carry it to a "more positive" position else howMuchToMultiply = -1; // if the point is in a positive position, carry it to a "more negative" position // applying the new values to the joint SkeletonPoint pos = new SkeletonPoint() { X = newJoint.Position.X + (howMuchMoveToX * howMuchToMultiply), Y = newJoint.Position.Y, // * (float)whatToMultiplyY, Z = newJoint.Position.Z, // * (float)whatToMultiplyZ }; newJoint.Position = pos; skToBeMoved.Joints[item] = newJoint; //if (skToBeMoved.Joints[JointType.HipCenter].Position.X < 0) //{ // if (item == JointType.HandLeft) // { // if (skToBeMoved.Joints[item].Position.X > 0) // { // } // } //} } return skToBeMoved; } Actualy, only X position is considered. Now, THE PROBLEM: If i stand in a negative position, and move my hand to a positive position, a have a strange behavior, look this image To reproduce this behaviour you could use this code using (SkeletonFrame frame = e.OpenSkeletonFrame()) { if (frame == null) return new Skeleton(); if (skeletons == null || skeletons.Length != frame.SkeletonArrayLength) { skeletons = new Skeleton[frame.SkeletonArrayLength]; } frame.CopySkeletonDataTo(skeletons); Skeleton skeletonToTest = skeletons.Where(s => s.TrackingState == SkeletonTrackingState.Tracked).FirstOrDefault(); Vector4 newPosition = new Vector4(); newPosition.X = -0.03412333f; newPosition.Y = 0.0407479f; newPosition.Z = 1.927342f; newPosition.W = 0; // ignored skeletonToTest.MoveTo(newPosition); } I know, this is simple math, but i cant figure it out why this is happen. Any help will be apreciated.
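
    The Math.Abs plus sign-flip combination is what causes the jump: it always pushes joints toward zero instead of applying one consistent translation, so a hand crossing the X axis gets moved the opposite way from the rest of the body. A sketch of the offset-based version (X only, to match the code above):

```csharp
public static Skeleton MoveTo(this Skeleton skToBeMoved, Vector4 destiny)
{
    // One signed offset, computed once from the hip center, applied to every joint
    float offsetX = destiny.X - skToBeMoved.Joints[JointType.HipCenter].Position.X;

    foreach (JointType item in Enum.GetValues(typeof(JointType)))
    {
        Joint joint = skToBeMoved.Joints[item];
        SkeletonPoint pos = new SkeletonPoint
        {
            X = joint.Position.X + offsetX,
            Y = joint.Position.Y,
            Z = joint.Position.Z
        };
        joint.Position = pos;
        skToBeMoved.Joints[item] = joint;
    }
    return skToBeMoved;
}
```

    The same destiny.Y - hip.Y and destiny.Z - hip.Z offsets can be added later if the Y and Z axes need to follow too.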

    Read the article

  • How do I classify using SVM Classifier?

    - by Gomathi
    I'm doing a project in liver tumor classification. Actually I initially used Region Growing method for liver segmentation and from that I segmented tumor using FCM. I,then, obtained the texture features using Gray Level Co-occurence Matrix. My output for that was stats = autoc: [1.857855266614132e+000 1.857955341199538e+000] contr: [5.103143332457753e-002 5.030548650257343e-002] corrm: [9.512661919561399e-001 9.519459060378332e-001] corrp: [9.512661919561385e-001 9.519459060378338e-001] cprom: [7.885631654779597e+001 7.905268525471267e+001] Now how should I give this as an input to the SVM program. function [itr] = multisvm( T,C,tst ) %MULTISVM(2.0) classifies the class of given training vector according to the % given group and gives us result that which class it belongs. % We have also to input the testing matrix %Inputs: T=Training Matrix, C=Group, tst=Testing matrix %Outputs: itr=Resultant class(Group,USE ROW VECTOR MATRIX) to which tst set belongs %----------------------------------------------------------------------% % IMPORTANT: DON'T USE THIS PROGRAM FOR CLASS LESS THAN 3, % % OTHERWISE USE svmtrain,svmclassify DIRECTLY or % % add an else condition also for that case in this program. % % Modify required data to use Kernel Functions and Plot also% %----------------------------------------------------------------------% % Date:11-08-2011(DD-MM-YYYY) % % This function for multiclass Support Vector Machine is written by % ANAND MISHRA (Machine Vision Lab. CEERI, Pilani, India) % and this is free to use. email: [email protected] % Updated version 2.0 Date:14-10-2011(DD-MM-YYYY) u=unique(C); N=length(u); c4=[]; c3=[]; j=1; k=1; if(N>2) itr=1; classes=0; cond=max(C)-min(C); while((classes~=1)&&(itr<=length(u))&& size(C,2)>1 && cond>0) %This while loop is the multiclass SVM Trick c1=(C==u(itr)); newClass=c1; svmStruct = svmtrain(T,newClass); classes = svmclassify(svmStruct,tst); % This is the loop for Reduction of Training Set for i=1:size(newClass,2) if newClass(1,i)==0; c3(k,:)=T(i,:); k=k+1; end end T=c3; c3=[]; k=1; % This is the loop for reduction of group for i=1:size(newClass,2) if newClass(1,i)==0; c4(1,j)=C(1,i); j=j+1; end end C=c4; c4=[]; j=1; cond=max(C)-min(C); % Condition for avoiding group %to contain similar type of values %and the reduce them to process % This condition can select the particular value of iteration % base on classes if classes~=1 itr=itr+1; end end end end Kindly guide me. Images:
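
    multisvm expects one row per sample in T and a row vector of class labels in C, so the GLCM stats only need to be flattened into a numeric row vector per image. A sketch (field list taken from the stats output above; the class labels are hypothetical):

```matlab
% Sketch: turn the GLCM stats struct into one feature row per image
featureRow = [stats.autoc, stats.contr, stats.corrm, stats.corrp, stats.cprom];

% Build the training set: one such row per training image, plus its label
T = [T; featureRow];          % training matrix
C = [C, classLabel];          % e.g. 1 = benign, 2 = malignant, 3 = normal

% Classify a new image by computing its features the same way
predictedClass = multisvm(T, C, testFeatureRow);
```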

    Read the article

  • Clojure vars and Java static methods

    - by j-g-faustus
    I'm a few days into learning Clojure and are having some teething problems, so I'm asking for advice. I'm trying to store a Java class in a Clojure var and call its static methods, but it doesn't work. Example: user=> (. java.lang.reflect.Modifier isPrivate 1) false user=> (def jmod java.lang.reflect.Modifier) #'user/jmod user=> (. jmod isPrivate 1) java.lang.IllegalArgumentException: No matching method found: isPrivate for class java.lang.Class (NO_SOURCE_FILE:0) at clojure.lang.Compiler.eval(Compiler.java:4543) From the exception it looks like the runtime expects a var to hold an object, so it calls .getClass() to get the class and looks up the method using reflection. In this case the var already holds a class, so .getClass() returns java.lang.Class and the method lookup obviously fails. Is there some way around this, other than writing my own macro? In the general case I'd like to have either an object or a class in a varible and call the appropriate methods on it - duck typing for static methods as well as for instance methods. In this specific case I'd just like a shorter name for java.lang.reflect.Modifier, an alias if you wish. I know about import, but looking for something more general, like the Clojure namespace alias but for Java classes. Are there other mechanisms for doing this? Edit: Maybe I'm just confused about the calling conventions here. I thought the Lisp (and by extension Clojure) model was to evaluate all arguments and call the first element in the list as a function. In this case (= jmod java.lang.reflect.Modifier) returns true, and (.getName jmod) and (.getName java.lang.reflect.Modifier) both return the same string. So the variable and the class name clearly evaluate to the same thing, but they still cannot be called in the same fashion. What's going on here? Edit 2 Answering my second question (what is happening here), the Clojure doc says that If the first operand is a symbol that resolves to a class name, the access is considered to be to a static member of the named class... Otherwise it is presumed to be an instance member http://clojure.org/java_interop under "The Dot special form" "Resolving to a class name" is apparently not the same as "evaluating to something that resolves to a class name", so what I am trying to do here is something the dot special form does not support.
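
    The dot form only treats its first operand as a class when the symbol itself resolves to one at compile time, which is why the var fails. Two workarounds that stay in ordinary Clojure - wrap the static call in a function, or fall back on Clojure's own reflector when the class genuinely lives in a var - are sketched below:

```clojure
;; Wrap the static method once; the wrapper is an ordinary var you can alias and pass around
(def jmod-private? #(java.lang.reflect.Modifier/isPrivate %))
(jmod-private? 1)                      ;=> false

;; Or reflect explicitly against the class stored in the var
(def jmod java.lang.reflect.Modifier)
(defn call-static [^Class c method & args]
  (clojure.lang.Reflector/invokeStaticMethod c (name method) (to-array args)))
(call-static jmod :isPrivate 1)        ;=> false
```

    A defmacro that expands to (. ClassName method args) would give the same short alias without the reflection cost, at the price of fixing the class at macro-expansion time.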

    Read the article

  • CMake: Mac OS X: ld: unknown option: -soname

    - by Alex Ivasyuv
    I try to build my app with CMake on Mac OS X, I get the following error: Linking CXX shared library libsml.so ld: unknown option: -soname collect2: ld returned 1 exit status make[2]: *** [libsml.so] Error 1 make[1]: *** [CMakeFiles/sml.dir/all] Error 2 make: *** [all] Error 2 This is strange, as Mac has .dylib extension instead of .so. There's my CMakeLists.txt: cmake_minimum_required(VERSION 2.6) PROJECT (SilentMedia) SET(SourcePath src/libsml) IF (DEFINED OSS) SET(OSS_src ${SourcePath}/Media/Audio/SoundSystem/OSS/DSP/DSP.cpp ${SourcePath}/Media/Audio/SoundSystem/OSS/Mixer/Mixer.cpp ) ENDIF(DEFINED OSS) IF (DEFINED ALSA) SET(ALSA_src ${SourcePath}/Media/Audio/SoundSystem/ALSA/DSP/DSP.cpp ${SourcePath}/Media/Audio/SoundSystem/ALSA/Mixer/Mixer.cpp ) ENDIF(DEFINED ALSA) SET(SilentMedia_src ${SourcePath}/Utils/Base64/Base64.cpp ${SourcePath}/Utils/String/String.cpp ${SourcePath}/Utils/Random/Random.cpp ${SourcePath}/Media/Container/FileLoader.cpp ${SourcePath}/Media/Container/OGG/OGG.cpp ${SourcePath}/Media/PlayList/XSPF/XSPF.cpp ${SourcePath}/Media/PlayList/XSPF/libXSPF.cpp ${SourcePath}/Media/PlayList/PlayList.cpp ${OSS_src} ${ALSA_src} ${SourcePath}/Media/Audio/Audio.cpp ${SourcePath}/Media/Audio/AudioInfo.cpp ${SourcePath}/Media/Audio/AudioProxy.cpp ${SourcePath}/Media/Audio/SoundSystem/SoundSystem.cpp ${SourcePath}/Media/Audio/SoundSystem/libao/AO.cpp ${SourcePath}/Media/Audio/Codec/WAV/WAV.cpp ${SourcePath}/Media/Audio/Codec/Vorbis/Vorbis.cpp ${SourcePath}/Media/Audio/Codec/WavPack/WavPack.cpp ${SourcePath}/Media/Audio/Codec/FLAC/FLAC.cpp ) SET(SilentMedia_LINKED_LIBRARY sml vorbisfile FLAC++ wavpack ao #asound boost_thread-mt boost_filesystem-mt xspf gtest ) INCLUDE_DIRECTORIES( /usr/include /usr/local/include /usr/include/c++/4.4 /Users/alex/Downloads/boost_1_45_0 ${SilentMedia_SOURCE_DIR}/src ${SilentMedia_SOURCE_DIR}/${SourcePath} ) #link_directories( # /usr/lib # /usr/local/lib # /Users/alex/Downloads/boost_1_45_0/stage/lib #) IF(LibraryType STREQUAL "static") ADD_LIBRARY(sml-static STATIC ${SilentMedia_src}) # rename library from libsml-static.a => libsml.a SET_TARGET_PROPERTIES(sml-static PROPERTIES OUTPUT_NAME "sml") SET_TARGET_PROPERTIES(sml-static PROPERTIES CLEAN_DIRECT_OUTPUT 1) ELSEIF(LibraryType STREQUAL "shared") ADD_LIBRARY(sml SHARED ${SilentMedia_src}) # change compile optimization/debug flags # -Werror -pedantic IF(BuildType STREQUAL "Debug") SET_TARGET_PROPERTIES(sml PROPERTIES COMPILE_FLAGS "-pipe -Wall -W -ggdb") ELSEIF(BuildType STREQUAL "Release") SET_TARGET_PROPERTIES(sml PROPERTIES COMPILE_FLAGS "-pipe -Wall -W -O3 -fomit-frame-pointer") ENDIF() SET_TARGET_PROPERTIES(sml PROPERTIES CLEAN_DIRECT_OUTPUT 1) ENDIF() ### TEST ### IF(Test STREQUAL "true") ADD_EXECUTABLE (bin/TestXSPF ${SourcePath}/Test/Media/PlayLists/XSPF/TestXSPF.cpp) TARGET_LINK_LIBRARIES (bin/TestXSPF ${SilentMedia_LINKED_LIBRARY}) ADD_EXECUTABLE (bin/test1 ${SourcePath}/Test/test.cpp) TARGET_LINK_LIBRARIES (bin/test1 ${SilentMedia_LINKED_LIBRARY}) ADD_EXECUTABLE (bin/TestFileLoader ${SourcePath}/Test/Media/Container/FileLoader/TestFileLoader.cpp) TARGET_LINK_LIBRARIES (bin/TestFileLoader ${SilentMedia_LINKED_LIBRARY}) ADD_EXECUTABLE (bin/testMixer ${SourcePath}/Test/testMixer.cpp) TARGET_LINK_LIBRARIES (bin/testMixer ${SilentMedia_LINKED_LIBRARY}) ENDIF (Test STREQUAL "true") ### TEST ### ADD_CUSTOM_TARGET(doc COMMAND doxygen ${SilentMedia_SOURCE_DIR}/doc/Doxyfile) There was no error on Linux. Build process: cmake -D BuildType=Debug -D LibraryType=shared . 
make

I found that an incorrect link command is being generated in CMakeFiles/sml.dir/link.txt. But why, when the whole point of CMake is to be cross-platform? How can I fix it?
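
    -soname is a GNU ld option (Apple's linker uses -install_name instead), so seeing it on Mac OS X usually points at cached settings generated for another toolchain rather than at CMakeLists.txt itself. A first, hedged step is a completely clean out-of-source configure on the Mac:

```sh
# Sketch: discard every cached decision and reconfigure natively on OS X
rm -rf CMakeCache.txt CMakeFiles/
mkdir -p build-osx && cd build-osx
cmake -D BuildType=Debug -D LibraryType=shared ..
make VERBOSE=1    # prints the full link line, so a stray -soname is easy to spot
```

    If a clean tree still produces -soname, the next place to look is anything that sets linker flags globally (toolchain files, an exported LDFLAGS) before CMake identifies the platform.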

    Read the article

  • How to perform add/update of a model object that contains EntitySet

    - by David Liddle
    I have a similar concept to the SO questions/tags scenario however am trying to decide the best way of implementation. Tables Questions, QuestionTags and Tags Questions QuestionTags Tags --------- ------------ ---- QID QID TID QName TID TName When adding/updating a question I have 2 textboxes. The important part is a single textbox that allows users to enter in multiple Tags separated by spaces. I am using Linq2Sql so the Questions model has an EntitySet of QuestionTags with then link to Tags. My question is regarding the adding/updating of Questions (part 1), and also how to best show QuestionTags for a Question (part 2). Part 1 Before performing an add/update, my service layer needs to deal with 3 scenarios before passing to their respective repositories. Insert Tags that do not already exist Insert/Update Question Insert QuestionTags - when updating need to remove existing QuestionTags Here is my code below however started to get into a bit of a muddle. I've created extension methods on my repositories to get Tags WithNames etc. public void Add(Question q, string tags) { var tagList = tags.Split(new string[] { " " }, StringSplitOptions.RemoveEmptyEntries).ToList(); using (DB.TransactionScope ts = new DB.TransactionScope()) { var existingTags = TagsRepository.Get() .WithName(tagList) .ToList(); var newTags = (from t in tagList select new Tag { TName = t }).Except(existingTags, new TagsComparer()).ToList(); TagsRepository.Add(newTags); //need to insert QuestionTags QuestionsRepository.Add(q); ts.Complete(); } } Part 2 My second question is, when displaying a list of Questions how is it best to show their QuestionTags? For example, I have an Index view that shows a list of Questions in a table. One of the columns shows an image and when the user hovers over it shows the list of Tags. My current implementation is to create a custom ViewModel and show a List of QuestionIndexViewModel in the View. QuestionIndexViewModel { Question Question { get; set; } string Tags { get; set; } } However, this seems a bit clumsy and quite a few DB calls. public ViewResult Index() { var model= new List<QuestionIndexViewModel>(); //make a call to get a list of questions //foreach question make a call to get their QuestionTags, //to be able to get their Tag names and then join them //to form a single string. return View(model); } Also, just for test purposes using SQL Profiler, I decided to iterate through the QuestionTags entity set of a Question in my ViewModel however nothing was picked up in Profiler? What would be the reason for this?
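
    For part 2, LINQ to SQL can bring the tags along with the questions in a single round trip using DataLoadOptions, and the joined tag string can be built in memory once the rows are loaded. A sketch - the entity and association names (Question.QuestionTags, QuestionTag.Tag, Tag.TName) are assumed from the table layout above:

```csharp
using System.Collections.Generic;
using System.Data.Linq;
using System.Linq;

// Sketch: eager-load QuestionTags/Tags and build the view models without per-row queries
public List<QuestionIndexViewModel> BuildIndexModel(MyDataContext db)
{
    var options = new DataLoadOptions();
    options.LoadWith<Question>(q => q.QuestionTags);
    options.LoadWith<QuestionTag>(qt => qt.Tag);
    db.LoadOptions = options;                     // must be set before the first query

    var questions = db.Questions.ToList();        // tags arrive in the same batch

    return questions.Select(q => new QuestionIndexViewModel
    {
        Question = q,
        Tags = string.Join(" ", q.QuestionTags.Select(qt => qt.Tag.TName).ToArray())
    }).ToList();
}
```

    As for nothing showing up in Profiler when iterating the EntitySet: if the tags were already loaded into the EntitySet, no further SQL is issued when it is enumerated, so an empty trace there is not by itself a sign that something is wrong.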

    Read the article

  • jquery 1.4.1 breaks my slideshow

    - by JMC Creative
    After toying with the jquery slideshow extension, I created my own that better suited my purposes ( I didn't like that all the images needed to load at the beginning for instance). Now, upon upgrading to jQuery 1.4.2 (I know I'm late), the slideshow loads the first image fine ( from the line$('div#slideshow img#ssone').fadeIn(1500); towards the bottom), but doesn't do anything beyond that. Does anyone have any idea which jquery construct is killing my script? The live page is at lplonline.org which is using 1.3.2 for the time being. Thanks in advance. Array.prototype.random = function( r ) { var i = 0, l = this.length; if( !r ) { r = this.length; } else if( r > 0 ) { r = r % l; } else { i = r; r = l + r % l; } return this[ Math.floor( r * Math.random() - i ) ]; }; jQuery(function($){ var imgArr = new Array(); imgArr[1] = "wp-content/uploads/rotator/Brbrshop4-hrmnywkshp72006.jpg"; imgArr[2] = "wp-content/uploads/rotator/IMGA0125.JPG"; //etc, etc, about 30 of these are created dynamically from a db function randImgs () { var randImg = imgArr.random(); var img1 = $('div#slideshow img#ssone'); var img2 = $('div#slideshow img#sstwo'); if(img1.is(':visible') ) { img2.fadeIn(1500); img1.fadeOut(1500,function() { img1.attr({src : randImg}); }); } else { img1.fadeIn(1500); img2.fadeOut(1500,function() { img2.attr({src : randImg}); }); } } setInterval(randImgs,9000); // 9 SECONDS $('div#slideshow img#ssone').fadeIn(1500); }); </script> <div id="slideshow"> <img id="ssone" style="display:none;" src="wp-content/uploads/rotator/quote-investments.png" alt="" /> <img id="sstwo" style="display:none;" src="wp-content/uploads/rotator/quote-drugs.png" alt="" /> </div>
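
    One low-risk thing to rule out first (a sketch, not a confirmed jQuery 1.4 diagnosis): drop the Array.prototype extension and keep the random pick as a plain helper - prototype additions are a frequent source of breakage after library upgrades - and note that the original array starts at index 1, so a random pick can land on the undefined slot 0 and assign an empty src:

```javascript
// Sketch: the same rotation without touching Array.prototype
var imgArr = [
    'wp-content/uploads/rotator/Brbrshop4-hrmnywkshp72006.jpg',
    'wp-content/uploads/rotator/IMGA0125.JPG'
    // ... remaining rotator images, starting at index 0
];

function randomItem(arr) {
    return arr[Math.floor(Math.random() * arr.length)];
}

function randImgs() {
    var img1 = $('#ssone'), img2 = $('#sstwo');
    var visible = img1.is(':visible') ? img1 : img2;
    var hidden  = (visible[0] === img1[0]) ? img2 : img1;

    hidden.fadeIn(1500);
    visible.fadeOut(1500, function () {
        visible.attr('src', randomItem(imgArr));
    });
}

setInterval(randImgs, 9000);
$('#ssone').fadeIn(1500);
```

    If the slideshow still stalls after the first image with this version, the browser console is the next stop - an exception thrown inside the first fadeOut callback would quietly kill the rotation.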

    Read the article

  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them? As I said in that question, I know about the following: You can implement "coroutines" as threads/thread pools behind the scenes. You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible. The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation. There are various JNI-based approaches to coroutines also possible. I'll address each one's deficiencies in turn. Thread-based coroutines This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages. JVM bytecode manipulation This solution is more practical, albeit a bit difficult to pull off. This is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work) with the advantage that you have only one architecture to worry about and get right. It also ties you down to only running your code on fully-compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs. The Da Vinci Machine The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM its features aren't going to be available everywhere. Indeed I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use this to make cool experiments but not for any code I expect to release to the real world. This also has the added problem similar to the JVM bytecode manipulation solution above: won't work on alternative stacks (like Android's). JNI implementation This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely but this, too, makes the point of doing things in Java entirely moot. So... Is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use the one of those four that smells the least (JVM manipulation) instead?

    Read the article

  • Runtime.exec causes duplicate JVM to hang indefinitely until killed (Solaris 10)

    - by John
    All, We are running a J2EE application on WebLogic server 9.2 MP2 with a jrockit 64-bit JVM (27.3.1) on Solaris 10. We call use runtime.exec to call an executable called jfmerge to create PDF documents. We have found that in Solaris, when runtime.exec is called, a duplicate JVM is temporarily spawned to kick off the jfmerge process. While this is inefficient (our JVM is 5 GB, thus the duplicated shell JVM is also 5 GB), the major problem lies in the fact that when there is heavy load on this functionality (PDF generation) in our application, sometimes the duplicated JVM never exits. When the JVM hangs, the servers create large issues (extreme application slowness and terminated user sessions) as the entire duplicate JVM get's all of its 5 GB of process size written to disk swap. We have noted the following hung thread correlated with a hung JVM process until the process is manually killed: "[STUCK] ExecuteThread: '17' for queue: 'weblogic.kernel.Default (self-tuning)'" id=3463 idx=0x158 tid=3460 prio=1 alive, in native, daemon at jrockit/io/FileNativeIO.readBytesPinned(Ljava/io/FileDescriptor;[BII)I(Native Method) at jrockit/io/FileNativeIO.readBytes(FileNativeIO.java:30) at java/io/FileInputStream.readBytes([BII)I(FileInputStream.java) at java/io/FileInputStream.read(FileInputStream.java:194) at java/lang/UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:227) at java/io/BufferedInputStream.fill(BufferedInputStream.java:218) at java/io/BufferedInputStream.read(BufferedInputStream.java:235) ^-- Holding lock: java/io/BufferedInputStream@0xfffffffec6510470[thin lock] at gov/v3/common/formgeneration/sessionbean/FormsBean.getProcessStatus(FormsBean.java:809) at gov/v3/common/formgeneration/sessionbean/FormsBean.createPDF(FormsBean.java:750) at gov/v3/common/formgeneration/sessionbean/FormsBean.getTemplateDetails(FormsBean.java:450) at gov/v3/common/formgeneration/sessionbean/FormsBean.generateSinglePDF(FormsBean.java:1371) at gov/v3/common/formgeneration/sessionbean/FormsBean.generatePDF(FormsBean.java:263) at gov/v3/common/formgeneration/sessionbean/FormsBean.endorseDocument(FormsBean.java:2377) at gov/v3/common/formgeneration/sessionbean/Forms_qaco28_EOImpl.endorseDocument(Forms_qaco28_EOImpl.java:214) at gov/v3/delegates/common/FormsAndNoticesDelegate.endorseDocument(FormsAndNoticesDelegate.java:128) at gov/v3/actions/common/EndorseDocumentAction.executeRequest(EndorseDocumentAction.java:68) at gov/v3/fwk/controller/struts/action/V3CommonDispatchAction.dispatchToExecuteMethod(V3CommonDispatchAction.java:532) at gov/v3/fwk/controller/struts/action/V3CommonDispatchAction.executeBaseAction(V3CommonDispatchAction.java:336) at gov/v3/fwk/controller/struts/action/V3BaseDispatchAction.execute(V3BaseDispatchAction.java:69) at org/apache/struts/action/RequestProcessor.processActionPerform(RequestProcessor.java:484) at gov/v3/fwk/controller/struts/requestprocessor/V3TilesRequestProcessor.processActionPerform(V3TilesRequestProcessor.java:384) at org/apache/struts/action/RequestProcessor.process(RequestProcessor.java:274) at org/apache/struts/action/ActionServlet.process(ActionServlet.java:1482) at org/apache/struts/action/ActionServlet.doGet(ActionServlet.java:507) at gov/v3/fwk/controller/struts/servlet/V3ControllerServlet.doGet(V3ControllerServlet.java:110) at javax/servlet/http/HttpServlet.service(HttpServlet.java:743) at javax/servlet/http/HttpServlet.service(HttpServlet.java:856) at weblogic/servlet/internal/StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227) at 
weblogic/servlet/internal/StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125) at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:283) at weblogic/servlet/internal/ServletStubImpl.execute(ServletStubImpl.java:175) at weblogic/servlet/internal/WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3231) at weblogic/security/acl/internal/AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic/security/service/SecurityManager.runAs(SecurityManager.java:121) at weblogic/servlet/internal/WebAppServletContext.securedExecute(WebAppServletContext.java:2002) at weblogic/servlet/internal/WebAppServletContext.execute(WebAppServletContext.java:1908) at weblogic/servlet/internal/ServletRequestImpl.run(ServletRequestImpl.java:1362) at weblogic/work/ExecuteThread.execute(ExecuteThread.java:209) at weblogic/work/ExecuteThread.run(ExecuteThread.java:181) at jrockit/vm/RNI.c2java(JJJJJ)V(Native Method) -- end of trace We would like to do a couple of things: 1.) Prevent the spawning of a duplicate JVM, as we do not need any of it's functions when executing the simple jfmerge executable, and it creates massive overhead. 2.) In the short term at least prevent this duplicate JVM from handing indefinitely.
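
    The stuck thread is blocked reading the child's stdout, which matches the classic Runtime.exec pitfall: if jfmerge fills its stdout/stderr pipe buffers and nothing drains them, both the child and the reading thread wait forever. A sketch of launching jfmerge with both streams drained (standard JDK API; the command-line arguments are placeholders) - this addresses the hang, not the fork-time duplication of the 5 GB heap:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Sketch: run jfmerge while continuously consuming its output so no pipe fills up
public final class JfMergeRunner {

    public static int run(String... jfmergeCommand) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(jfmergeCommand);
        pb.redirectErrorStream(true);                 // fold stderr into stdout: one stream to drain
        Process p = pb.start();

        BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
        StringBuilder log = new StringBuilder();
        for (String line; (line = out.readLine()) != null; ) {
            log.append(line).append('\n');            // consume everything the child writes
        }
        int exitCode = p.waitFor();
        out.close();
        return exitCode;
    }
}
```

    The duplicated 5 GB image comes from fork/exec itself; the usual mitigations are a small long-lived helper process that launches external commands on the JVM's behalf, or (on much later JDKs) the posix_spawn launch mechanism.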

    Read the article

  • Is "Systems Designer" the job title that best describes what I do? [closed]

    - by ivo-rossi
    After having worked as Java developer for almost 3 years in the same company that I currently work at, I moved to a new position associated with the development of the same application. I’m in this new position for more than 1 year now. My official job title is Systems Designer, but I’m not sure this is a title that expresses well what I do. So my question here is what would be the most appropriate job title for me? I see this question as important for my career development. After all, I should be able to explain in one word what I do. And it’s no longer “Java Developer”. Well, in more than one word, this is what I do: The business analysts gather requirements / business problems to be solved with the clients and then discuss these requirements with me. Given the requirements, I design the high level solutions to be implemented in our system (e.g. a new screen on the client application, modifications to existing reports, extension to the XML export format of some objects, etc). I base my decision on the current capabilities of the system, the overall impact that the solutions would have on the system and the estimated effort to implement them (as I was a developer of this same application for almost 3 years before I moved to this position, I’m confident in my estimates). The solutions are discussed iteratively with the business analysts until we agree that they are good. The outcome of this analysis is what we call the “requirements design” document, which is written by me, shared with clients for approval and then also with the team that is going to implement the solutions and test them. Note that there are a few problems that I need to find a solution for that are non-functional. If the users are unhappy with the performance of a certain tool, I will investigate what can be done to speed it up. I will do some research – often based in the Java code itself - to identify possibilities of optimizations. But in this new position I no longer code, the main outcome of my work is really the “requirements design”. Is “Systems Designer” really the most appropriate job title?

    Read the article

  • Custom onsynctopreference for XUL textbox

    - by Alexey Romanov
    I wanted to enable custom shortcuts in my Firefox extension. The idea is that the user just focuses on a textbox, presses key combination, and it's shown in the textbox and saved to a preference. However, I couldn't get it to work. With this XUL <?xml version="1.0"?> <?xml-stylesheet href="chrome://global/skin/" type="text/css"?> <?xml-stylesheet href="chrome://mozapps/skin/pref/pref.css" type="text/css"?> <!DOCTYPE window SYSTEM "chrome://nextplease/locale/nextplease.dtd"> <prefwindow id="nextpleaseprefs" title="&options.title;" buttons="accept, cancel" xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"> <prefpane id="nextplease.general" label="&options.general.title;" image="chrome://nextplease/skin/Sound Mixer.png"> <preferences> <preference id="nextkey" name="nextplease.nextkey" type="int"/> </preferences> <vbox flex="1"> <hbox align="center"> <label value="&options.general.nextKey;" /> <textbox id="nextkey" flex="1" editable="false" onkeyup="return nextplease.handleKeySelection(this, event);" preference-editable="true" preference="nextkey" onsynctopreference="alert('syncing'); return nextplease.syncKeySelector(this);"/> </hbox> </vbox> </prefpane> <script type="application/x-javascript" src="chrome://nextplease/content/nextpleaseCommon.js" /> <script type="application/x-javascript" src="chrome://nextplease/content/nextpleaseOptions.js" /> </prefwindow> the event in onkeyup works. But when I click the OK button, I don't see a "syncing" alert. Why isn't onsynctopreference working? Is it impossible to have custom onsynctopreference attribute for a textbox?

    Read the article

  • Problem updating through LINQtoSQL in MVC application using StructureMap, Repository Pattern and UoW

    - by matt
    I have an ASP MVC application using LINQ to SQL for data access. I am trying to use the Repository and Unit of Work patterns, with a service layer consuming the repositories and unit of work. I am experiencing a problem when attempting to perform updates on a particular repository. My application architecture is as follows: My service class: public class MyService { private IRepositoryA _RepositoryA; private IRepositoryB _RepositoryB; private IUnitOfWork _unitOfWork; public MyService(IRepositoryA ARepositoryA, IRepositoryB ARepositoryB, IUnitOfWork AUnitOfWork) { _unitOfWork = AUnitOfWork; _RepositoryA = ARepositoryA; _RepositoryB = ARepositoryB; } public PerformActionOnObject(Guid AID) { MyObject obj = _RepositoryA.GetRecords() .WithID(AID); obj.SomeProperty = "Changed to new value"; _RepositoryA.UpdateRecord(obj); _unitOfWork.Save(); } } Repository interface: public interface IRepositoryA { IQueryable<MyObject> GetRecords(); UpdateRecord(MyObject obj); } Repository LINQtoSQL implementation: public class LINQtoSQLRepositoryA : IRepositoryA { private MyDataContext _DBContext; public LINQtoSQLRepositoryA(IUnitOfWork AUnitOfWork) { _DBConext = AUnitOfWork as MyDataContext; } public IQueryable<MyObject> GetRecords() { return from records in _DBContext.MyTable select new MyObject { ID = records.ID, SomeProperty = records.SomeProperty } } public bool UpdateRecord(MyObject AObj) { MyTableRecord record = (from u in _DB.MyTable where u.ID == AObj.ID select u).SingleOrDefault(); if (record == null) { return false; } record.SomeProperty = AObj.SomePropery; return true; } } Unit of work interface: public interface IUnitOfWork { void Save(); } Unit of work implemented in data context extension. public partial class MyDataContext : DataContext, IUnitOfWork { public void Save() { SubmitChanges(); } } StructureMap registry: public class DataServiceRegistry : Registry { public DataServiceRegistry() { // Unit of work For<IUnitOfWork>() .HttpContextScoped() .TheDefault.Is.ConstructedBy(() => new MyDataContext()); // RepositoryA For<IRepositoryA>() .Singleton() .Use<LINQtoSQLRepositoryA>(); // RepositoryB For<IRepositoryB>() .Singleton() .Use<LINQtoSQLRepositoryB>(); } } My problem is that when I call PerformActionOnObject on my service object, the update never fires any SQL. I think this is because the datacontext in the UnitofWork object is different to the one in RepositoryA where the data is changed. So when the service calls Save() on it's IUnitOfWork, the underlying datacontext does not have any updated data so no update SQL is fired. Is there something I've done wrong in the StrutureMap registry setup? Or is there a more fundamental problem with the design? Many thanks.
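
    The symptom lines up with the lifetimes in the registry: the repositories are singletons, so each keeps the first MyDataContext it was ever constructed with, while IUnitOfWork is a fresh per-request context - Save() then flushes a context that never saw the changes. A sketch of aligning the scopes so everything shares the per-request context (same StructureMap syntax as above):

```csharp
public class DataServiceRegistry : Registry
{
    public DataServiceRegistry()
    {
        // One DataContext per HTTP request, shared by the unit of work and both repositories
        For<IUnitOfWork>()
            .HttpContextScoped()
            .TheDefault.Is.ConstructedBy(() => new MyDataContext());

        For<IRepositoryA>().HttpContextScoped().Use<LINQtoSQLRepositoryA>();
        For<IRepositoryB>().HttpContextScoped().Use<LINQtoSQLRepositoryB>();
    }
}
```

    With HTTP-scoped objects it is also usual to release them at EndRequest (ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects) so each request's DataContext is disposed.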

    Read the article

  • [C++] Upload image to ImageShack

    - by cinek1lol
    Hi! I would like to send pictures via a program written in C + +. - OK WinExec("C:\\curl\\curl.exe -H Expect: -F \"fileupload=@C:\\curl\\ok.jpg\" -F \"xml=yes\" -# \"http://www.imageshack.us/index.php\" -o data.txt -A \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1\" -e \"http://www.imageshack.us\"", NULL); It works, but I would like to send the pictures from pre-loaded carrier to a variable char (you know what I mean? First off, I load the pictures into a variable and then send the variable), cause now I have to specify the path of the picture on a disk. I wanted to write this program in c++ by using the curl library, not through exe. extension. I have also found such a program (which has been modified by me a bit) #include <stdio.h> #include <string.h> #include <iostream> #include <curl/curl.h> #include <curl/types.h> #include <curl/easy.h> int main(int argc, char *argv[]) { CURL *curl; CURLcode res; struct curl_httppost *formpost=NULL; struct curl_httppost *lastptr=NULL; struct curl_slist *headerlist=NULL; static const char buf[] = "Expect:"; curl_global_init(CURL_GLOBAL_ALL); /* Fill in the file upload field */ curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "send", CURLFORM_FILE, "nowy.jpg", CURLFORM_END); curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "nowy.jpg", CURLFORM_COPYCONTENTS, "nowy.jpg", CURLFORM_END); curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "submit", CURLFORM_COPYCONTENTS, "send", CURLFORM_END); curl = curl_easy_init(); headerlist = curl_slist_append(headerlist, buf); if(curl) { curl_easy_setopt(curl, CURLOPT_URL, "http://www.imageshack.us/index.php"); if ( (argc == 2) && (!strcmp(argv[1], "xml=yes")) ) curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headerlist); curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost); res = curl_easy_perform(curl); curl_easy_cleanup(curl); curl_formfree(formpost); curl_slist_free_all (headerlist); } system("pause"); return 0; }
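
    libcurl can take the image straight from memory: CURLFORM_BUFFER / CURLFORM_BUFFERPTR / CURLFORM_BUFFERLENGTH attach an in-memory blob to the multipart form instead of a path on disk. A sketch (it assumes the JPEG bytes are already in a std::vector<char>; the buffer must stay alive until curl_easy_perform and curl_formfree have run):

```cpp
#include <curl/curl.h>
#include <vector>

// Sketch: attach an in-memory JPEG to the form instead of reading it from disk
void add_image_from_memory(curl_httppost **formpost, curl_httppost **lastptr,
                           const std::vector<char> &jpegBytes)
{
    curl_formadd(formpost, lastptr,
                 CURLFORM_COPYNAME, "fileupload",          // same field name the curl.exe command used
                 CURLFORM_BUFFER, "upload.jpg",            // filename reported to the server
                 CURLFORM_BUFFERPTR, &jpegBytes[0],
                 CURLFORM_BUFFERLENGTH, (long)jpegBytes.size(),
                 CURLFORM_END);
}
```

    The rest of the existing code (the xml=yes field, CURLOPT_HTTPPOST, curl_easy_perform) stays as it is.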

    Read the article

  • Wordpress installed in root folder, subdomain now not working, GoDaddy host

    - by Kristin
    Hi, please forgive me for being a complete beginner at this, I'd rather not have to try to deal with this myself but as GoDaddy support have not replied after 2 days I'm going to have to. I think my problem is the same as the one above, but I'm not 100% sure, so I'm reposting it, I'm not really confident enough to attempt to try the fixes I've seen here so I need someone to give me baby instructions? Our original website (www.mwpics.com.au) was built in Dreamweaver etc, recently we created a new website in Wordpress, in a subdomain, then migrated it over to the root folder where it is now operating fine. I also moved the files for the old website into another directory which I called 'old', so they're all still there. The problem is that I have a subdomain set up - which is still showing as set up in the control panel on godaddy the url is www.mwpics.com.au/clients and it is at www.clients.mwpics.com.au. This directory contains loads of other directories, each of which is password protected by .htaccess files and which our clients access directly (not through the site) to download their finished work. The test one and the one for random clients is www.mwpics.com.au/clients/temp - username and password both temp (the usernames are all the same as the directory names). Since the WP install to the root directory the /clients extension no longer works (it should bring up an information page which is an .html index page in the directory) and the /clients/name extensions no longer works - it goes back to the wp site with a 'not found' error message. Strangely it does bring up the box for the username and password, but when you enter it it just goes back to the 'not found' message. Someone told me it was the .htaccess file - so as an experiment, I renamed the .htaccess file in the root directory and then copied the .htaccess file from the old root files into the root directory, eureka! It worked - and also the WP site opened to the home page... but bummer - the /pages in the WP site now no longer worked! But at least I know the source of the problem. So I switched it back and this is the status quo - I have no idea how to fix this, and with everyone back at work tomorrow, clients are going to want to start downloading their stuff... Can anyone help me? I'm starting to panic a bit
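
    WordPress's rewrite rules send every URL that is not an existing file or directory to index.php, and its error handling can also swallow the 401 challenge for the protected folders - which fits the "login box appears, then Not Found" symptom. A hedged sketch of the usual workaround in the root .htaccess, placed above the WordPress block:

```apacheconf
# Sketch: leave /clients (and everything under it) to Apache, not WordPress
RewriteEngine On
RewriteRule ^clients(/.*)?$ - [L]

# Let Apache use its built-in 401 response instead of handing the error to WordPress
ErrorDocument 401 default

# ... the standard WordPress rules stay below this point ...
# RewriteCond %{REQUEST_FILENAME} !-f
# RewriteCond %{REQUEST_FILENAME} !-d
# RewriteRule . /index.php [L]
```

    The clients.mwpics.com.au subdomain still needs to point at the /clients folder in the hosting control panel; the rewrite change only affects the mwpics.com.au/clients/... URLs.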

    Read the article

  • UCA + Natural Sorting

    - by Alix Axel
    I recently learnt that PHP already supports the Unicode Collation Algorithm via the intl extension: $array = array ( 'al', 'be', 'Alpha', 'Beta', 'Álpha', 'Àlpha', 'Älpha', '????', 'img10.png', 'img12.png', 'img1.png', 'img2.png', ); if (extension_loaded('intl') === true) { collator_asort(collator_create('root'), $array); } Array ( [0] => al [2] => Alpha [4] => Álpha [5] => Àlpha [6] => Älpha [1] => be [3] => Beta [11] => img1.png [9] => img10.png [8] => img12.png [10] => img2.png [7] => ???? ) As you can see this seems to work perfectly, even with mixed case strings! The only drawback I've encountered so far is that there is no support for natural sorting and I'm wondering what would be the best way to work around that, so that I can merge the best of the two worlds. I've tried to specify the Collator::SORT_NUMERIC sort flag but the result is way messier: collator_asort(collator_create('root'), $array, Collator::SORT_NUMERIC); Array ( [8] => img12.png [7] => ???? [9] => img10.png [10] => img2.png [11] => img1.png [6] => Älpha [5] => Àlpha [1] => be [2] => Alpha [3] => Beta [4] => Álpha [0] => al ) However, if I run the same test with only the img*.png values I get the ideal output: Array ( [3] => img1.png [2] => img2.png [1] => img10.png [0] => img12.png ) Can anyone think of a way to preserve the Unicode sorting while adding natural sorting capabilities?
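
    ICU collators already support this: the NUMERIC_COLLATION attribute (distinct from the Collator::SORT_NUMERIC sort flag tried above) makes runs of digits compare by numeric value while the rest of the UCA ordering is untouched. A sketch:

```php
<?php
// Sketch: UCA ordering plus natural handling of embedded numbers
$collator = collator_create('root');
collator_set_attribute($collator, Collator::NUMERIC_COLLATION, Collator::ON);
collator_asort($collator, $array);
```

    With the attribute on, img1.png, img2.png, img10.png, img12.png should come out in natural order while the accent- and case-aware ordering of the other strings is preserved.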

    Read the article

  • Is using a Singleton to pass credentials in a multi-tenant application a code smell?

    - by Hans Gruber
    Currently working on a multi-tenant application that employs Shared DB/Shared Schema approach. IOW, we enforce tenant data segregation by defining a TenantID column on all tables. By convention, all SQL reads/writes must include a Where TenantID = '?' clause. Not an ideal solution, but hindsight is 20/20. Anyway, since virtually every page/workflow in our app must display tenant specific data, I made the (poor) decision at the project's outset to employ a Singleton to encapsulate the current user credentials (i.e. TenantID and UserID). My thinking at the time was that I didn't want to add a TenantID parameter to each and every method signature in my Data layer. Here's what the basic pseudo-code looks like: public class UserIdentity { public UserIdentity(int tenantID, int userID) { TenantID = tenantID; UserID = userID; } public int TenantID { get; private set; } public int UserID { get; private set; } } public class AuthenticationModule : IHttpModule { public void Init(HttpApplication context) { context.AuthenticateRequest += new EventHandler(context_AuthenticateRequest); } private void context_AuthenticateRequest(object sender, EventArgs e) { var userIdentity = _authenticationService.AuthenticateUser(sender); if (userIdentity == null) { //authentication failed, so redirect to login page, etc } else { //put the userIdentity into the HttpContext object so that //its only valid for the lifetime of a single request HttpContext.Current.Items["UserIdentity"] = userIdentity; } } } public static class CurrentUser { public static UserIdentity Instance { get { return HttpContext.Current.Items["UserIdentity"]; } } } public class WidgetRepository: IWidgetRepository{ public IEnumerable<Widget> ListWidgets(){ var tenantId = CurrentUser.Instance.TenantID; //call sproc with tenantId parameter } } As you can see, there are several code smells here. This is a singleton, so it's already not unit test friendly. On top of that you have a very tight-coupling between CurrentUser and the HttpContext object. By extension, this also means that I have a reference to System.Web in my Data layer (shudder). I want to pay down some technical debt this sprint by getting rid of this singleton for the reasons mentioned above. I have a few thoughts on what an better implementation might be, but if anyone has any guidance or lessons learned they could share, I would be much obliged.
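
    A common refactoring (a sketch, not the only option) is to hide the ambient lookup behind an interface that gets constructor-injected, so the data layer depends on an abstraction, System.Web stays in one web-specific class, and tests can supply a fake provider:

```csharp
using System.Collections.Generic;
using System.Web;

public interface ICurrentUserProvider
{
    UserIdentity Current { get; }
}

// The only class that knows about HttpContext
public class HttpContextUserProvider : ICurrentUserProvider
{
    public UserIdentity Current
    {
        get { return (UserIdentity)HttpContext.Current.Items["UserIdentity"]; }
    }
}

public class WidgetRepository : IWidgetRepository
{
    private readonly ICurrentUserProvider _userProvider;

    public WidgetRepository(ICurrentUserProvider userProvider)
    {
        _userProvider = userProvider;
    }

    public IEnumerable<Widget> ListWidgets()
    {
        var tenantId = _userProvider.Current.TenantID;
        // call sproc with tenantId parameter, exactly as before
        throw new System.NotImplementedException("data access elided in this sketch");
    }
}
```

    The AuthenticationModule keeps populating HttpContext.Current.Items as it does today; only the consumers change.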

    Read the article

  • How do I register my app to open PDF files on iPad?

    - by uttam
    i want to open the pdf file in my app from pdf page, but i am not getting any option of opening the pdf in my app. this my info.plist file <key>CFBundleDevelopmentRegion</key> <string>English</string> <key>CFBundleDocumentTypes</key> <array> <dict> <key>CFBundleTypeName</key> <string>PDF</string> <key>CFBundleTypeRole</key> <string>Viewer</string> <key>CFBundleTypeIconFiles</key> <string>Icon.png</string> <key>LSItemContentTypes</key> <string>com.neosofttech.pdf</string> <key>LSHandlerRank</key> <string>Owner</string> </dict> </array> <key>UTExportedTypeDeclarations</key> <array> <dict> <key>UTTypeConformsTo</key> <array> <string>public.pdf</string> </array> <key>UTTypeDescription</key> <string>PDFReader File</string> <key>UTTypeIdentifier</key> <string>com.neosofttech.pdf</string> <key>UTTypeTagSpecification</key> <dict> <key>public.filename-extension</key> <string>pdf</string> </dict> </dict> pls tell me where i am wrong in this, how can i open the pdf file in my app.
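
    PDF already has a system-declared UTI (com.adobe.pdf), so the app should not export its own type for it - it only needs to list that UTI under CFBundleDocumentTypes. Two other details in the plist above: CFBundleTypeIconFiles and LSItemContentTypes must be arrays, not plain strings, and LSHandlerRank should be Alternate for a type the app merely views. A sketch of the corrected entry (the UTExportedTypeDeclarations block can be removed):

```xml
<key>CFBundleDocumentTypes</key>
<array>
    <dict>
        <key>CFBundleTypeName</key>
        <string>PDF Document</string>
        <key>CFBundleTypeRole</key>
        <string>Viewer</string>
        <key>LSHandlerRank</key>
        <string>Alternate</string>
        <key>CFBundleTypeIconFiles</key>
        <array>
            <string>Icon.png</string>
        </array>
        <key>LSItemContentTypes</key>
        <array>
            <string>com.adobe.pdf</string>
        </array>
    </dict>
</array>
```

    The app delegate then has to handle the incoming file URL (application:openURL:sourceApplication:annotation: or the launch options dictionary) to actually display the document, and the "Open In..." menu may need the app to be removed and reinstalled before it picks up the change.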

    Read the article

  • Marshalling non-Blittable Structs from C# to C++

    - by Greggo
    I'm in the process of rewriting an overengineered and unmaintainable chunk of my company's library code that interfaces between C# and C++. I've started looking into P/Invoke, but it seems like there's not much in the way of accessible help. We're passing a struct that contains various parameters and settings down to unmanaged codes, so we're defining identical structs. We don't need to change any of those parameters on the C++ side, but we do need to access them after the P/Invoked function has returned. My questions are: What is the best way to pass strings? Some are short (device id's which can be set by us), and some are file paths (which may contain Asian characters) Should I pass an IntPtr to the C# struct or should I just let the Marshaller take care of it by putting the struct type in the function signature? Should I be worried about any non-pointer datatypes like bools or enums (in other, related structs)? We have the treat warnings as errors flag set in C++ so we can't use the Microsoft extension for enums to force a datatype. Is P/Invoke actually the way to go? There was some Microsoft documentation about Implicit P/Invoke that said it was more type-safe and performant. For reference, here is one of the pairs of structs I've written so far: C++ /** Struct used for marshalling Scan parameters from managed to unmanaged code. */ struct ScanParameters { LPSTR deviceID; LPSTR spdClock; LPSTR spdStartTrigger; double spinRpm; double startRadius; double endRadius; double trackSpacing; UINT64 numTracks; UINT32 nominalSampleCount; double gainLimit; double sampleRate; double scanHeight; LPWSTR qmoPath; //includes filename LPWSTR qzpPath; //includes filename }; C# /// <summary> /// Struct used for marshalling scan parameters between managed and unmanaged code. /// </summary> [StructLayout(LayoutKind.Sequential)] public struct ScanParameters { [MarshalAs(UnmanagedType.LPStr)] public string deviceID; [MarshalAs(UnmanagedType.LPStr)] public string spdClock; [MarshalAs(UnmanagedType.LPStr)] public string spdStartTrigger; public Double spinRpm; public Double startRadius; public Double endRadius; public Double trackSpacing; public UInt64 numTracks; public UInt32 nominalSampleCount; public Double gainLimit; public Double sampleRate; public Double scanHeight; [MarshalAs(UnmanagedType.LPWStr)] public string qmoPath; [MarshalAs(UnmanagedType.LPWStr)] public string qzpPath; }
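
    A few hedged notes against those questions, with a sketch: for strings that only travel managed-to-native and are never modified, the LPStr/LPWStr declarations above are fine, and LPWSTR is the right call for paths that may contain Asian characters; passing the struct by ref in the P/Invoke signature lets the marshaller do the copying, so a manual IntPtr is only needed if the native side keeps the pointer after the call returns; a C++ bool marshals most predictably as UnmanagedType.I1, and a plain enum is blittable as its underlying integer. The DLL name and entry point below are placeholders:

```csharp
using System.Runtime.InteropServices;

internal static class NativeScan
{
    // Sketch: the marshaller builds the native ScanParameters from the managed struct
    [DllImport("ScanNative.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern int StartScan(ref ScanParameters parameters);
}

// How a native bool / enum would be declared if they were added to the struct later
[StructLayout(LayoutKind.Sequential)]
public struct ExtraSettings
{
    [MarshalAs(UnmanagedType.I1)]
    public bool autoFocus;      // matches a 1-byte C++ 'bool' (the default would be a 4-byte BOOL)

    public ScanMode mode;       // enums marshal as their underlying int, no attribute needed
}

public enum ScanMode
{
    Continuous = 0,
    Stepped = 1
}
```

    This is the explicit P/Invoke route; the "implicit P/Invoke" the Microsoft docs describe is a C++/CLI interop feature, so it only applies if a mixed-mode wrapper assembly is on the table.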

    Read the article
