Search Results

Search found 109764 results on 4391 pages for 'good code'.


  • What causes a JRE 6 JVM code cache leak?

    - by Arturo Knight
    Since switching to JRE 6, my server's code cache usage (non-heap) keeps growing indefinitely. My application creates a lot of classes at runtime, BUT these classes are successfully unloaded during the GC process. I can see these classes getting unloaded in the GC logs, and the permGen usage stays constant. I specifically make sure in my code that these classes are orphaned once I am finished with them, so they correctly get garbage collected from permGen. The code cache, however, keeps growing. I only became aware of the code cache after switching to JRE 6. So I guess my questions are: Does GC include the code cache? What, specifically, could cause a code cache memory leak? Is there a bug in JDK 6 in this area?
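
    For anyone who wants to watch the pool directly, here is a minimal sketch (assuming a HotSpot JVM, where the JIT compiler's pool is reported under the name "Code Cache") that dumps the usage of every memory pool via the standard management API; running it periodically, or pointing JConsole at the same MBeans, shows whether the code cache ever shrinks after a GC:

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryPoolMXBean;

        public class CodeCachePeek {
            public static void main(String[] args) {
                // Heap pools (eden, survivor, old gen) and non-heap pools
                // (perm gen, code cache) are all reported the same way.
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    System.out.printf("%-20s type=%s used=%d max=%d%n",
                            pool.getName(), pool.getType(),
                            pool.getUsage().getUsed(), pool.getUsage().getMax());
                }
            }
        }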

    Read the article

  • How do I use test Perl modules from test Perl scripts?

    - by DVK
    If my Perl code has a production code location and a "test" code location (e.g., production Perl code is in /usr/code/scripts, test Perl code is in /usr/code/test/scripts; production Perl libraries are in /usr/code/lib/perl and test versions of those libraries are in /usr/code/test/lib/perl), is there an easy way for me to achieve such a setup? The exact requirements are: The code must be THE SAME in the production and test locations. To clarify, to promote any code (library or script) from test to production, the ONLY thing which needs to happen is literally issuing a cp command from the test to the prod location - both the file name AND file contents must remain identical. Test versions of scripts must call other test scripts and test libraries (if they exist) or production libraries (if test libraries do not exist). The code paths must be the same between test and production, with the exception of the base directory (/usr/code/ vs /usr/code/test/). I will present how we solved the problem as an answer to this question, but I'd like to know if there's a better way.

    Read the article

  • Is it legal/ethical to use source code provided in academic papers, or talks given at trade events?

    - by lucid
    So, is it legal to use source code from papers and such? For example, this paper on Perlin noise: http://mrl.nyu.edu/~perlin/paper445.pdf links to this source code: http://mrl.nyu.edu/~perlin/noise/ And Stam's famous talk on fluid dynamics includes source code throughout, annotated with instructions like "add these macros to the beginning of your code": http://www.dgp.toronto.edu/people/stam/reality/Research/pdf/GDC03.pdf I'm just not sure if it's legal to copy and paste this for use in your own commercial code. If I were to make my own implementation, it would end up being close to identical, since I'd probably use the source code as a reference. I know very little about copyright law, including how it applies in these situations, and I can never find usage and licensing terms for these. Nor did googling any terms I could think of provide me the specific answer I need. Does anyone know for sure what the rules/laws are here, or where I can find the answer?

    Read the article

  • What are the default values for arch and code options when using nvcc?

    - by Auron
    When compiling your CUDA code, you have to select the architecture your code is being generated for. nvcc provides two parameters to specify this architecture, basically: arch specifies the virtual architecture, which can be compute_10, compute_11, etc.; code specifies the real architecture, which can be sm_10, sm_11, etc. So a command like this: nvcc x.cu -arch=compute_13 -code=sm_13 will generate 'cubin' code for devices with 1.3 compute capability. Please correct me if I'm wrong. What I would like to know is: what are the default values for these two parameters? What is the default architecture that nvcc uses when no value for arch or code is specified?

    Read the article

  • Rendering JavaScript at the server-side level: a good or bad idea?

    - by davidhong
    I want to make it clear first: this isn't a question about server-side JavaScript or running JavaScript server side. This is a question regarding the rendering of JavaScript code (which will be executed on the client side) from server-side code. Having said that, take a look at the ASP.NET code below, for example:

        hlRemoveCategory.Attributes.Add("onclick", "return confirm('Are you sure you want to delete this?');")

    This is prescribing the client-side onclick event on the server side. As opposed to:

        $('a[rel=remove]').bind('click', function(event) {
            return confirm('Are you sure you want to delete this?');
        });

    Now the question I want to ask is: what is the benefit of rendering JavaScript from server-side code, or vice versa? I personally prefer the second way of hooking up client-side UI/behaviour to HTML elements, for the following reasons:

        - The server side already does whatever it needs to, including data validation, event delegation, etc.
        - What the server side sees as an event is not necessarily the same process on the client side; there are plenty more events on the client side (just look at custom events).
        - What happens on the client side and on the server side during an event could be completely unrelated and decoupled.
        - Whatever happens on the client side happens on the client side; there is no need for the server to know. The server should process and run what is given to it; how the process comes to life is not really up to it to decide in the event of client-side events. And so on and so forth.

    These are my thoughts, obviously. I want to know what others think and whether there has been any discussion on this topic. Topics branching from this argument can reach: Code management: is it easier to render everything from the server side? Separation of concerns: is it easier if client-side logic is separated from server-side logic? Efficiency: which is more efficient, both in terms of coding and running? At the end of the day, I am trying to move my team towards the second approach. There are a lot of old guys in this team who are afraid of this change. I just wish to convince them with the right facts and stats. Let me know your thoughts.

    Read the article

  • Delphi LoadLibrary failing to find DLL in another directory - any good options?

    - by Chris Thornton
    Two Delphi programs need to load foo.dll, which contains some code that injects a client-auth certificate into a SOAP request. foo.dll resides in c:\fooapp\foo.dll and is normally loaded by c:\fooapp\foo.exe. That works fine. The other program needs the same functionality, but it resides in c:\program files\unwantedstepchild\sadapp.exe. Both apps load the DLL with this code:

        FOOLib := LoadLibrary('foo.dll');
        ...
        If FOOLib <> 0 then
        begin
          FOOProc := GetProcAddress(FOOLib, 'xInjectCert');
          FOOProc(myHttpRequest, Data, CertName);
        end;

    It works great for foo.exe, as the DLL is right there. sadapp.exe fails to load the library, so FOOLib is 0, and the rest never gets called. The sadapp.exe program therefore silently fails to inject the cert, and when we test against production, the cert is missing, so the connection fails. Obviously, we should have fully qualified the path to the DLL. Without going into a lot of details, there were aspects of the testing that masked this problem until recently, and now it's basically too late to fix in code, as that would require a full regression test, and there isn't time for that. Since we've painted ourselves into a corner, I need to know if there are any options that I've overlooked. While we can't change the code (for this release), we CAN tweak the installer. I've found that placing c:\fooapp into the path works. So does adding a second copy of foo.dll directly into c:\program files\unwantedstepchild. c:\fooapp\foo.exe will always be running while sadapp.exe is running, so I was hoping that Windows would find it that way, but apparently not. Is there a way to tell Windows that I really want that same DLL? Maybe a manifest or something? This is the sort of "magic bullet" that I'm looking for. I know I can: Modify the Windows path, probably in the installer. That's ugly. Add a second copy of the DLL, directly into the unwantedstepchild folder. Also ugly. Delay the project while we code and test a proper fix. Unacceptable. Other? Thanks for any guidance, especially with "Other". I understand that this issue is not necessarily specific to Delphi. Thanks!

    Read the article

  • Rails 3 custom renderer: where do I put this code?

    - by Derick Bailey
    I'm following along with Yehuda's example of how to build a custom renderer for Rails 3, according to this post: http://www.engineyard.com/blog/2010/render-options-in-rails-3/ I've got my code working, but I'm having a hard time figuring out where this code should live. Right now, I've got my code stuck right inside of my controller file. Doing this, everything works. When I move the code to the lib folder, though, I have to explicitly 'require' my file in the controller that needs the renderer or it won't work. Yes, the file gets loaded automatically when it sits in the lib folder, but the code to add the renderer isn't working for some reason until I do a require on it. Where should I put my code to add the renderer and MIME type, so that Rails 3 will pick it up and register it for me, without me having to manually require the file in my controller?

    Read the article

  • Do you have any tips for comments to keep them in step with the code? [closed]

    - by Rob Wells
    Possible Duplicate: How do you like your comments? G'day, I've read both of Steve McConnell's excellent Code Complete books ("Code Complete" and "Code Complete 2") and was wondering if people have any other suggestions for commenting code. My commenting mantra could be summed up by the basic idea of expressing "what the code below cannot say". While enjoying this interesting blog post by Jeff about commenting, I was still left wondering "When coding, when do you feel a comment is required?" Edit: Oops. Seems to be a duplicate of this question: http://stackoverflow.com/questions/121945/how-do-you-like-your-comments so sorry for the noise. Thanks to my, seemingly, SO shadow for pointing it out - wouldn't have thought I was that interesting. Now off to read the original post and see if it is relevant. Edit: I meant to emphasise the best approach to ensuring that your comments stay in step with the code - maybe expressing the intent rather than the mechanism, for instance.

    Read the article

  • Should old/legacy/unused code be deleted from source control repository?

    - by Checkers
    I've encountered this in multiple projects. As the code base evolves, some libraries, applications, and components get abandoned and/or deprecated. Most people prefer to keep them in. The usual argument is that the code does not really take any space; it can be left alone until needed again. So a repository slowly turns into a cesspool of legacy code, where it's hard to find anything. Some people delete old code, since it creates clutter and raises more questions for new people, and you can restore any old snapshot of the code base anyway. However, you can't always find the old code if you don't know where to look, as none of the (common) VCSs I know offer search over the entire repository including all historical revisions, and the only way to search the old files is to check out the revision where the deleted file exists. What would be a good approach to repository management?

    Read the article

  • Shared WCF client code between .NET and Silverlight apps?

    - by Eduardo Scoz
    I'm developing a .NET application that will have both a WinForms and a Silverlight client. Although the majority of the code will be in the server, I'll need to have quite a bit of logic in the clients as well, and I would like to keep the client library code the same. From what I could figure out so far, I need to have two different project types, a class library and a Silverlight class library, and link the files from one project to the other. This seems kind of lame, but it works for simple code. My problem, though, is that the code generated by SVCUtil.exe to access WCF services is different from the code generated by slsvcutil.exe, and the Silverlight code is actually incompatible with the .NET one: I get a bunch of problems with the System.ServiceModel.Channel classes when I try to import the class into .NET. Has anybody done anything similar to this before? What am I doing wrong?

    Read the article

  • How do I configure NetBeans to only step through Java code that I've written?

    - by blissapp
    Am I missing something? I'm delighted that all that code is there showing how the generic collections work, etc. However, when I want to simply walk through my code, I'm forever finding myself going deeper into Java's own library code than I care to. Is it possible to simply disable that when stepping through code? I want to treat all of that stuff as a black box; code stepping is just for stuff I've written. And you know what, once I've got that capability, is it possible to wrap up my own code that way too, so that I can step through just the bits I'm most interested in? And if I can't do it easily in NetBeans, is it possible in Eclipse? Thanks.

    Read the article

  • What will happen if the code can't finish on time?

    - by Tattat
    If I set a timer to execute code every 3 seconds and the code isn't finished within 3 seconds, what will happen? Will the computer terminate the code, wait for the code to finish, or keep the timer going and execute the code concurrently with the unfinished run?

        int delay = 0;      // delay for 0 sec.
        int period = 3000;  // repeat every 3 sec.
        Timer timer = new Timer();
        timer.scheduleAtFixedRate(new TimerTask() {
            public void run() {
                // Task here ...
                // It may take more than 3 sec to finish, what will happen?
            }
        }, delay, period);
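
    As a way to observe the behaviour empirically, here is a minimal, self-contained sketch (the 5-second sleep is an invented stand-in for a slow task) that timestamps each run, so the schedule's reaction to an overrunning task can be read straight off the console:

        import java.util.Timer;
        import java.util.TimerTask;

        public class SlowTaskDemo {
            public static void main(String[] args) {
                Timer timer = new Timer();
                timer.scheduleAtFixedRate(new TimerTask() {
                    public void run() {
                        // Log when each execution actually starts.
                        System.out.println("run started at " + System.currentTimeMillis());
                        try {
                            Thread.sleep(5000); // deliberately longer than the 3 sec period
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                }, 0, 3000);
            }
        }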

    Read the article

  • What is the quickest method to see the actual color of a hex code like #a7a7a7?

    - by metal-gear-solid
    What is the quickest method to see the actual color of a hex code like #a7a7a7? When I work on someone else's CSS and deal with color codes, I want to quickly see the color of a particular hex code. Suppose I'm editing CSS in Notepad and I find the code #a7a7a7 - how can I know what color this code represents? If I have a color on my screen, I can quickly find its hex code with the help of some tools, but I need just the opposite of that. I'm not talking about seeing a site's whole color chart; I want to see the color of one particular hex code.
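
    If a tiny throwaway program counts as "quick", here is a minimal sketch (assuming a desktop JVM; java.awt.Color.decode accepts the familiar #rrggbb form) that fills a window with the color so it can be seen directly:

        import java.awt.Color;
        import javax.swing.JFrame;

        public class HexSwatch {
            public static void main(String[] args) {
                Color c = Color.decode("#a7a7a7"); // parse the hex triplet
                JFrame frame = new JFrame("#a7a7a7");
                frame.getContentPane().setBackground(c); // paint the whole window with it
                frame.setSize(200, 200);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            }
        }

    In practice, most editors and browser developer tools will preview a hex color inline, which is usually faster still.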

    Read the article

  • What is the most practical way to add functionality to this piece of code?

    - by Adam Arold
    I'm writing an open source library which handles hexagonal grids. It mainly revolves around the HexagonalGrid and the Hexagon class. There is a HexagonalGridBuilder class which builds the grid, which contains Hexagon objects. What I'm trying to achieve is to enable the user to add arbitrary data to each Hexagon. The interface looks like this:

        public interface Hexagon extends Serializable {
            // ... other methods not important in this context
            <T> void setSatelliteData(T data);
            <T> T getSatelliteData();
        }

    So far so good. I'm writing another class, however, named HexagonalGridCalculator, which adds some fancy pieces of computation to the library, like calculating the shortest path between two Hexagons or calculating the line of sight around a Hexagon. My problem is that for those I need the user to supply some data for the Hexagon objects, like the cost of passing through a Hexagon, or a boolean flag indicating whether the object is transparent/passable or not. My question is how I should implement this. My first idea was to write an interface like this:

        public interface HexagonData {
            void setTransparent(boolean isTransparent);
            void setPassable(boolean isPassable);
            void setPassageCost(int cost);
        }

    and make the user implement it, but then it occurred to me that if I add any other functionality later, all code will break for those who are using the old interface. So my next idea is to add annotations like @PassageCost, @IsTransparent and @IsPassable, which can be added to fields, and when I'm doing the computation I can look for the annotations in the satelliteData supplied by the user. This looks flexible enough if I take into account the possibility of later changes, but it uses reflection. I have no benchmark of the costs of using annotations, so I'm a bit in the dark here. I think that in 90-95% of the cases the efficiency is not important, since most users won't use a grid where this is significant, but I can imagine someone trying to create a grid with a size of 5,000,000,000 x 5,000,000,000. So which path should I start walking on? Or are there better alternatives? Note: these ideas are not implemented yet, so I did not pay too much attention to good names.
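
    A third option, sketched below with hypothetical names (this is not the library's actual API), is to have the calculator accept small extractor functions instead of a fixed user-implemented interface; a new attribute added later is then just a new optional extractor rather than a breaking change, and no reflection is involved:

        import java.util.function.Predicate;
        import java.util.function.ToIntFunction;

        // Hypothetical sketch: computations pull the attributes they need out of
        // the user's satellite data through functions supplied at construction.
        public final class GridCostModel<T> {
            private final ToIntFunction<T> passageCost;
            private final Predicate<T> passable;

            public GridCostModel(ToIntFunction<T> passageCost, Predicate<T> passable) {
                this.passageCost = passageCost;
                this.passable = passable;
            }

            public int costOf(T satelliteData) {
                return passageCost.applyAsInt(satelliteData);
            }

            public boolean isPassable(T satelliteData) {
                return passable.test(satelliteData);
            }
        }

    A caller whose satellite type is, say, TileData would wire it up as new GridCostModel<TileData>(t -> t.cost, t -> t.passable), keeping both the Hexagon interface and the user's classes untouched when new attributes appear.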

    Read the article

  • Is there a good tutorial for figuring out what a website is doing so your program can do the same thing?

    - by brian d foy
    Is there a good guide or tutorial for people who need to programmatically interact with dynamic websites? There's been a rash of Perl questions about that lately, and I haven't found a good resource to point people toward. I'm asking not because I need one, but because I don't want to waste my time writing it if it already exists. Although I'm most interested in Perl, the extra tools and techniques are mostly the same. Typically, I see these problems in people's questions:

        - Handling, setting, and saving cookies
        - Finding and interacting with forms
        - Handling JavaScript inside your user agent, especially things like onLoad, onSubmit, and Ajax
        - Using HTTP sniffer tools
        - Using web developer plugins in interactive browsers
        - Interacting with the DOM, screen scraping, etc.

    If there's no good tutorial, I'll add it to my list of things to do (unless someone else wants to do it :). Along the way, if you don't have a suggestion for an existing tutorial, please suggest the things that you think should be in a new one, including links, your favorite tools, and your own user-agent development experiences. I don't care about the particular language you use.

    Read the article

  • Why is it not good to use $_SESSION in RESTful implementations?

    - by keisimone
    Original question: I read that for RESTful websites it is not good to use $_SESSION. Why is it not good? How then do I properly authenticate users without looking up the database all the time to check the user's roles? I read that it is not good to use $_SESSION: http://www.recessframework.org/page/towards-restful-php-5-basic-tips I am creating a WEBSITE, not a web service, in PHP, and I am trying to make it more RESTful - at least in spirit. Right now I am rewriting all the actions to use form tags with POST and adding a hidden value called _method, which would be "delete" for the delete action and "put" for the update action. However, I am not sure why it is recommended NOT to use $_SESSION. I would like to know why, and what I can do to improve. To allow easy authorization checking, what I did was: after logging in the user, the username is stored in $_SESSION. Every time the user navigates to a page, the page checks whether the username is stored inside $_SESSION and then, based on $_SESSION, retrieves all the info, including privileges, from the database and evaluates the authorization to access the page based on the info retrieved. Is the way I am implementing this bad? Not RESTful? How do I improve performance and security? Thank you.

    Read the article

  • When is a good time to start thinking about scaling?

    - by Slokun
    I've been designing a site over the past couple of days, and been doing some research into different aspects of scaling a site horizontally. If things go as planned, in a few months (years?) I know I'd need to worry about scaling the site up and out, since the resources it would end up consuming would be huge. So this got me to thinking: when is the best time to start thinking about, and designing for, scalability? If you start too early on, you could easily overcomplicate your design and make it impossible to actually build. You could also get too caught up in the details, the architecture, whatever, and wind up getting nothing done. Also, if you do get it working but the site never takes off, you may have wasted a good chunk of extra effort. On the other hand, you could be saving yourself a ton of effort down the road. Designing it from the ground up to be big would make it much easier later on to let it grow big, with very little rewriting going on. I know that for what I'm working on, I've decided to make at least a few choices now on the side of scaling, but I'm not going to do a complete change of thinking to get it to scale completely. Notably, I've redesigned my database from a conventional relational design to one similar to what was suggested on the Reddit site linked below, and I'm going to give memcache a try. So, the basic question: when is a good time to start thinking or worrying about scaling, and what are some good designs, tips, etc. for doing so? A couple of things I've been reading, for those who are interested: http://www.codinghorror.com/blog/2009/06/scaling-up-vs-scaling-out-hidden-costs.html http://highscalability.com/blog/2010/5/17/7-lessons-learned-while-building-reddit-to-270-million-page.html http://developer.yahoo.com/performance/rules.html

    Read the article

  • Is it a good idea to use MySQL and Neo4j together?

    - by Sanoj
    I will make an application with a lot of similar items (millions), and I would like to store them in a MySQL database, because I would like to do a lot of statistics and searches on specific values for specific columns. But at the same time, I will store relations between all the items in many connected binary-tree-like structures (transitive closure), and relational databases are not good at that kind of structure, so I would like to store all relations in Neo4j, which has good performance for this kind of data. My plan is to have all data except the relations in the MySQL database and all relations with item_id stored in the Neo4j database. When I want to look up a tree, I first search Neo4j for all the item_ids in the tree, then I search the MySQL database for all the specified items in a query that would look like:

        SELECT * FROM items
        WHERE item_id = 45 OR item_id = 345435 OR item_id = 343
           OR item_id = 78 OR item_id = 4522 OR item_id = 676
           OR item_id = 443 OR item_id = 4255 OR item_id = 4345

    Is this a good idea, or am I very wrong? I haven't used graph databases before. Are there any better approaches to my problem? How would the MySQL query perform in this case?
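
    On the MySQL side, that OR chain is normally written as a single IN (...) list; here is a minimal JDBC sketch (table and column names taken from the question, connection setup left to the caller) of building the query from the ids returned by the graph traversal:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.util.List;
        import java.util.stream.Collectors;

        public class TreeItemLookup {
            // ids: the item_id values collected from the Neo4j traversal.
            public static void printItems(Connection conn, List<Long> ids) throws Exception {
                String placeholders = ids.stream()
                        .map(id -> "?")
                        .collect(Collectors.joining(", "));
                String sql = "SELECT * FROM items WHERE item_id IN (" + placeholders + ")";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (int i = 0; i < ids.size(); i++) {
                        ps.setLong(i + 1, ids.get(i)); // JDBC parameters are 1-based
                    }
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getLong("item_id"));
                        }
                    }
                }
            }
        }

    With an index on item_id, an IN list of a few dozen ids should amount to a cheap batch of point lookups, which is the part of the problem MySQL handles well.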

    Read the article

  • Replacing repetitively occurring loops with eval in JavaScript - good or bad?

    - by Herc
    Hello stackoverflow! I have a certain loop occurring several times in various functions in my code. To illustrate with an example, it's pretty much along the lines of the following:

        for (var i = 0; i <= 5; i++) {
            function1(function2(arr[i], i), $('div' + i));
            $('span' + i).value = function3(arr[i]);
        }

    where i is the loop counter, of course. For the sake of reducing my code size and avoiding repeating the loop declaration, I thought I should replace it with the following:

        function loop(s) {
            for (var i = 0; i <= 5; i++) {
                eval(s);
            }
        }
        [...]
        loop("function1(function2(arr[i],i),$('div'+i));$('span'+i).value = function3(arr[i]);");

    Or should I? I've heard a lot about eval() slowing code execution, and I'd like it to work as fast as a proper loop even in the Nintendo DSi browser, but I'd also like to cut down on code. What would you suggest? Thank you in advance!

    Read the article

  • Understanding EDI 997.

    - by VishnuTiwariBlog
    Hi guys, this is for the EDI starter. Below are the complete EDI 997 segment and element details.

    997 Functional Acknowledgment Transaction Layout:

    010 ST - Transaction Set Header (M)
        Description: To indicate the start of a transaction set and to assign a control number. Example: ST*997*382823~
        ST01 (M): Code uniquely identifying a Transaction Set
        ST02 (M): Identifying control number that must be unique within the transaction set functional group assigned by the originator for a transaction set

    020 AK1 - Functional Group Response Header (M)
        Description: To start acknowledgment of a functional group. Example: AK1*QM*2459823
        AK101: Code identifying a group of application related transaction sets (IN = Invoice Information (810), SH = Ship Notice/Manifest (856))
        AK102: Assigned number originated and maintained by the sender

    030 AK2 - Transaction Set Response Header (M)
        Description: To start acknowledgment of a single transaction set. Example: AK2*856*001
        AK201 (M): Code uniquely identifying a Transaction Set (810 = Invoice, 856 = Ship Notice/Manifest)
        AK202 (M): Identifying control number that must be unique within the transaction set functional group assigned by the originator for a transaction set

    040 AK3 - Data Segment Note (O)
        Description: To report errors in a data segment and identify the location of the data segment. Example: AK3*TD3*9
        AK301 Segment ID Code: Code defining the segment ID of the data segment in error (see Appendix A - Number 77)
        AK302 Segment Position in Transaction Set: The numerical count position of this data segment from the start of the transaction set; the transaction set header is count position 1

    050 AK4 - Data Element Note (O)
        Description: To report errors in a data element or composite data structure and identify the location of the data element. Example: AK4*2**2
        AK401 Position in Segment: Code indicating the relative position of a simple data element, or the relative position of a composite data structure combined with the relative position of the component data element within the composite data structure, in error; the count starts with 1 for the simple data element or composite data structure immediately following the segment ID
        AK402 Element Position in Segment: Used to indicate the relative position of a simple data element, or the relative position of a composite data structure with the relative position of the component within the composite data structure, in error; in the data segment the count starts with 1 for the simple data element or composite data structure immediately following the segment ID
        AK403 Data Element Syntax Error Code: Code indicating the error found after syntax edits of a data element (1 = Mandatory Data Element Missing, 2 = Conditional Required Data Element Missing, 3 = Too Many Data Elements, 4 = Data Element Too Short, 5 = Data Element Too Long, 6 = Invalid Character in Data Element, 7 = Invalid Code Value, 8 = Invalid Date, 9 = Invalid Time, 10 = Exclusion Condition Violated)
        AK404 Copy of Bad Data Element: A copy of the data element in error

    060 AK5 - Transaction Set Response Trailer (M)
        Description: To acknowledge acceptance or rejection and report errors in a transaction set. Examples: AK5*A~ AK5*R*5~
        AK501 Transaction Set Acknowledgment Code: Code indicating accept or reject condition based on the syntax editing of the transaction set (A = Accepted, E = Accepted But Errors Were Noted, R = Rejected)
        AK502 Transaction Set Syntax Error Code: Code indicating the error found based on the syntax editing of a transaction set (1 = Transaction Set Not Supported, 2 = Transaction Set Trailer Missing, 3 = Transaction Set Control Number in Header and Trailer Do Not Match, 4 = Number of Included Segments Does Not Match Actual Count, 5 = One or More Segments in Error, 6 = Missing or Invalid Transaction Set Identifier, 7 = Missing or Invalid Transaction Set Control Number)

    070 AK9 - Functional Group Response Trailer (M)
        Description: To acknowledge acceptance or rejection of a functional group and report the number of included transaction sets from the original trailer, the accepted sets, and the received sets in this functional group. Examples: AK9*A*1*1*1~ AK9*R*1*1*0~
        AK901 Functional Group Acknowledge Code: Code indicating accept or reject condition based on the syntax editing of the functional group (A = Accepted, E = Accepted But Errors Were Noted, R = Rejected)
        AK902 Number of Transaction Sets Included: Total number of transaction sets included in the functional group or interchange (transmission) group terminated by the trailer containing this data element
        AK903 Number of Received Transaction Sets: Number of Transaction Sets received
        AK904 Number of Accepted Transaction Sets: Number of accepted Transaction Sets in a Functional Group
        AK905 Functional Group Syntax Error Code: Code indicating the error found based on the syntax editing of the functional group header and/or trailer (1 = Functional Group Not Supported, 2 = Functional Group Version Not Supported, 3 = Functional Group Trailer Missing, 4 = Group Control Number in the Functional Group Header and Trailer Do Not Agree, 5 = Number of Included Transaction Sets Does Not Match Actual Count, 6 = Group Control Number Violates Syntax)

    080 SE - Transaction Set Trailer (M)
        Description: To indicate the end of the transaction set and provide the count of the transmitted segments (including the beginning (ST) and ending (SE) segments). Example: SE*9*223~
        SE01 Number of Included Segments: Total number of segments included in a transaction set, including the ST and SE segments
        SE02 Transaction Set Control Number: Identifying control number that must be unique within the transaction set functional group assigned by the originator for a transaction set

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial

    Microsoft recently released the first CTP of a new development environment called WebMatrix, which, along with some of its supporting technologies, is squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let's face it: ASP.NET development isn't exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it's not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft's attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies:

    IIS Developer Express

    IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that's been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server, such as the inability to serve custom ISAPI extensions (i.e., things like PHP or classic ASP), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, providing much better compatibility between development and live deployment scenarios.

    SQL Server Compact 4.0

    Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new - it's been around for a few years, but it's been severely hobbled in the past by terrible tool support and the inability to support more than a single connection, in Microsoft's attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections, and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don't have to install a special system configuration to run SQL Compact, as it is a drop-in database engine: copy the small assembly into your BIN folder (or use it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally, WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill.
    Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

    ASP.NET Razor View Engine

    What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There's no support for a "page model" or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here's a very simple example of what Razor markup looks like, along with some comment annotations:

        <!DOCTYPE html>
        <html>
            <head>
                <title></title>
            </head>
            <body>
            <h1>Razor Test</h1>

            <!-- simple expressions -->
            @DateTime.Now
            <hr />

            <!-- method expressions -->
            @DateTime.Now.ToString("T")

            <!-- code blocks -->
            @{
                List<string> names = new List<string>();
                names.Add("Rick");
                names.Add("Markus");
                names.Add("Claudio");
                names.Add("Kevin");
            }

            <!-- structured block statements -->
            <ul>
            @foreach(string name in names){
                <li>@name</li>
            }
            </ul>

            <!-- Conditional code -->
            @if(true) {
                <!-- Literal Text embedding in code -->
                <text>
                true
                </text>;
            }
            else
            {
                <!-- Literal Text embedding in code -->
                <text>
                false
                </text>;
            }
            </body>
        </html>

    Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a "page" becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn't look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine, minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC's view implementation as well as many of the helper classes found in MVC. It's not hard to guess the motivation for this sort of view engine: for beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content.
    The syntax is easier to read and grok and much shorter to type than ASP.NET alligator tags (<% %>), and it is also aesthetically easier to understand what's happening in the markup code. Razor is also a better fit for Microsoft's vision of ASP.NET MVC: it's a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn't carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script- or meta-driven output generators for many types of applications, from code/screen generators, to simple form letters, to data merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4 die!) on this engine. Razor also better fits the "view-based" approach where the view is supposed to be mostly a visual representation that doesn't hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn't be surprised if Razor becomes the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that's not working currently unless you manually configure the script mappings and add the appropriate assemblies. It's possible to do it, but it's probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

    WebMatrix Development Environment

    To tie all of these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There's no debugging support for code at this time. Note that Razor pages don't require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that's needed to see changes while testing an application locally. It's essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled during run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
    WebMatrix leverages the Web Platform Installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes, fill in a few simple configuration options, and you end up with a running application that's ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

    Simplified Database Access

    The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

        @{
            var db = Database.OpenFile("FirstApp.sdf");
            string sql = "select * from customers where Id > @0";
        }
        <ul>
        @foreach(var row in db.Query(sql,1)){
            <li>@row.FirstName @row.LastName</li>
        }
        </ul>

    The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters by simple values. The result is returned as a collection of rows, which in turn have a row object with dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note that these are string-based queries rather than LINQ or Entity Framework's strongly typed LINQ queries. While this may seem like a step back, it's also in line with the expectations of non-.NET script developers, who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why something like this was not included in .NET from the beginning, with Microsoft instead making developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax, which uses simple, static data object methods to perform simple data tasks with one line of code, is common in scripting languages and is a good match for folks working in PHP/Python, etc. It seems Microsoft has taken great advantage of .NET 4.0's dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn't accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies as required.

    The good, the bad, the obnoxious - it's still .NET

    The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it's still .NET. Although the syntax may look foreign, it's still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
    In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML helpers. If you've used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy - there's a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

    Who needs WebMatrix? Uhm… good question

    Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity in providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started with. There's also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little, too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other, simpler platforms like PHP or Python, which cater to the down-and-dirty developer. Microsoft will be hard pressed to win those folks - and other hardcore PHP developers - back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it's still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they're not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers are trying to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers, you are still going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer, the learning curve isn't made any easier by WebMatrix; it's just been shifted a tad bit further along in your development endeavor, to when you run out of canned components that are supplied either by Microsoft or the community. The database helpers are interesting, and actually I've heard a lot of discussion from various developers who've been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it's practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in 'getting stuff done' easily than necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life-changing issues of development, rather than the bread-and-butter scenarios that many developers are interested in to get their work accomplished.
    And that in the end may be WebMatrix's main raison d'être: to bring some focus back at Microsoft on the fact that simpler and more high-level solutions are actually needed to appeal to non-high-end developers, as well as to provide the necessary tools for the high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components), which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware, with required functionality missing - especially debugging and IntelliSense, but also general editor support. It's not clear whether this is because the product is still in an early alpha release or whether it's simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, WebMatrix's technology pieces (which are really independent of the WebMatrix product) are what is interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of concerns that people have had and have complained about for some time with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone scripting/view engine that is pluggable. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn't inconsistent due to the baggage of non-implemented WebForms features that don't work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there's some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who's already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn't give you anything that you want. The tools provided are minimal and provide nothing that you can't get in Visual Studio today, except the minimal Razor syntax highlighting, so there's little need to take a step back. With Visual Studio integration coming later, there's little reason to look at WebMatrix for tooling. It's good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we've been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it's good to take a step - or several steps - back and take another look and realize just how far we've come.
    WebMatrix is one of those reminders, and one that likely will result in some positive changes on the platform as a whole.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7

    Read the article

  • How do you update live web sites with code changes?

    - by Aaron Anodide
    I know this is a very basic question. If someone could humor me and tell me how they would handle this, I'd be grateful. I decided to post this because I am about to install SyncToy to remedy the issue below, and I feel a bit unprofessional using a "Toy", but I can't think of a better way. Many times I find that when I am in this situation, I am missing some painfully obvious way to do things - this comes from being the only developer in the company.

    The setup: an ASP.NET web application developed on my computer at work. The solution has two projects: Website (files) and WebsiteLib (C#/DLL). It uses a Git repository and is deployed on a GoGrid 2008R2 web server. Deployment: 1. Make code changes. 2. Push to Git. 3. Remote desktop to the server. 4. Pull from Git. 5. Overwrite the live files by dragging/dropping with Windows Explorer. In step 5 I delete all the files from the website root... this can't be a good thing to do. That's why I am about to install SyncToy...

    UPDATE: Thanks for all the useful responses. I can't pick which one to mark as the answer; between them I got several useful suggestions: Web project = whole site packaged into a single DLL - the downside for me is that I can't push simple updates; being a lone developer in a company of 50, this remains something that is simpler at times. Pulling straight from SCM into the web root of the site - I originally didn't do this out of fear that my SCM hidden directory might end up being exposed, but the answers here helped me get over that (although I still don't like having one more thing to worry about forgetting to make sure is still true over time). Using a web farm, and systematically deploying to nodes - this is the ideal solution for zero downtime, which is actually something I care about since the site is essentially a real-time revenue source for my company; I might have a hard time convincing them to double the cost of the servers though. Finally, the reinforcement of the basic principle that there needs to be a single-click deployment for the site OR ELSE THERE'S SOMETHING WRONG is probably the most useful thing I got out of the answers.

    UPDATE 2: I thought I'd come back to this and update it with the actual solution that's been in place for many months now and is working perfectly (for my single-web-server setup). The process I use is: 1. Make code changes. 2. Push to Git. 3. Remote desktop to the server. 4. Pull from Git. 5. Run the following batch script:

        cd C:\Users\Administrator
        %systemroot%\system32\inetsrv\appcmd.exe stop site "/site.name:Default Web Site"
        robocopy Documents\code\da\1\work\Tree\LendingTreeWebSite1 c:\inetpub\wwwroot /E /XF connectionsconfig Web.config
        %systemroot%\system32\inetsrv\appcmd.exe start site "/site.name:Default Web Site"

    As you can see, this brings the site down, uses robocopy to intelligently copy only the files that have changed, then brings the site back up. It typically runs in less than 2 seconds. Since peak traffic on this site is about 2 requests per second, missing 4 requests per site update is acceptable. Since I've gotten more proficient with Git, I've found that the first four steps above being a "manual process" is also acceptable, although I'm sure I could roll the whole thing into a single click if I wanted to. The documentation for AppCmd.exe is here. The documentation for Robocopy is here.

    Read the article

  • How to write the code for this program, specifically in Mathematica? [closed]

    - by asd
    I implemented a solution to the problem below in Mathematica, but it takes a very long time (hours) to compute f of the ki's or the set B for large numbers. Somebody suggested that implementing this in C++ resulted in a solution in less than 10 minutes. Would C++ be a good language to learn to solve these problems, or can my Mathematica code be improved to fix the performance issues? I don't know anything about C or C++, and it would be difficult for me to start learning those languages. I prefer to improve or write new code in Mathematica.

    Problem Description

    Let f be an arithmetic function and A = {k1, k2, ..., kn} a set of integers in increasing order. I want to start with k1 and compare f(ki) with f(k1). If f(ki) > f(k1), replace k1 with ki. Now, starting with ki, compare f(kj) with f(ki) for j > i. If f(kj) > f(ki), replace ki with kj, and repeat this procedure. At the end we will have a subsequence B = {L1, ..., Lm} of A with the property f(L(i+1)) > f(L(i)) for any 1 <= i <= m-1.

    For example, let f be the divisor function of the integers. Here is part of my code; this is just a sample, and the instances in my program could be much larger than these:

        f[n_] := DivisorSigma[0, n];
        g[n_] := Product[Prime[i], {i, 1, PrimePi[n]}];
        k1 = g[67757] g[353] g[59] g[19] g[11] g[7] g[5]^2 6^3 2^7;
        k2 = g[67757] g[353] g[59] g[19] g[11] g[7] g[5] 6^5 2^7;
        k3 = g[67757] g[359] g[53] g[19] g[11] g[7] g[5] 6^4 2^7;
        k4 = g[67759] g[349] g[53] g[19] g[11] g[7] g[5] 6^5 2^6;
        k5 = g[67757] g[359] g[53] g[19] g[11] g[7] g[5] 6^4 2^8;
        k6 = g[67759] g[349] g[53] g[19] g[11] g[7] g[5]^2 6^3 2^7;
        k7 = g[67757] g[359] g[53] g[19] g[11] g[7] g[5] 6^5 2^6;
        k8 = g[67757] g[359] g[53] g[19] g[11] g[7] g[5] 6^4 2^9;
        k9 = g[67757] g[359] g[53] g[19] g[11] g[7] g[5]^2 6^3 2^7;
        k10 = g[67757] g[359] g[53] g[19] g[11] g[7] g[5] 6^5 2^7;
        k11 = g[67759] g[349] g[53] g[19] g[11] g[7] g[5]^2 6^4 2^6;
        k12 = g[67757] g[359] g[53] g[19] g[11] g[7] g[5]^2 6^3 2^8;
        k13 = g[67757] g[359] g[53] g[19] g[11] g[7] g[5]^2 6^4 2^6;
        k14 = g[67757] g[359] g[53] g[19] g[11] g[7] g[5]^2 6^3 2^9;
        k15 = g[67757] g[359] g[53] g[19] g[11] g[7] g[5]^2 6^4 2^7;
        k16 = g[67757] g[359] g[53] g[23] g[11] g[7] g[5] 6^4 2^8;
        k17 = g[67757] g[359] g[59] g[19] g[11] g[7] g[5] 6^4 2^7;
        k18 = g[67757] g[359] g[53] g[23] g[11] g[7] g[5] 6^4 2^9;
        k19 = g[67759] g[353] g[53] g[19] g[11] g[7] g[5] 6^4 2^6;
        k20 = g[67763] g[347] g[53] g[19] g[11] g[7] g[5] 6^4 2^7;
        k = {k1, k2, k3, k4, k5, k6, k7, k8, k9, k10,
             k11, k12, k13, k14, k15, k16, k17, k18, k19, k20};
        i = 1;
        count = 0;
        For[j = i, j <= 20, j++,
         If[f[k[[j]]] - f[k[[i]]] > 0, i = j; Print["k", i]; count = count + 1]];
        Print["count= ", count]

    Read the article
