Search Results

Search found 20566 results on 823 pages for 'folder structure'.

  • Class; Struct; Enum confusion, what is better?

    - by Angel Brighteyes
    I have 46 rows of information, 2 columns each row ("Code Number", "Description"). These codes are returned to the client dependent upon the success or failure of their initial submission request. I do not want to use a database file (csv, sqlite, etc) for the storage/access. The closest type that I can think of for how I want these codes to be shown to the client is the exception class. Correct me if I'm wrong, but from what I can tell enums do not allow strings, though this sort of structure seemed the better option initially based on how it works (e.g. 100 = "missing name in request"). Thinking about it, creating a class might be the best modus operandi. However I would appreciate more experienced advice or direction and input from those who might have been in a similar situation. Currently this is what I have: class ReturnCode { private int _code; private string _message; public ReturnCode(int code) { Code = code; } public int Code { get { return _code; } set { _code = value; _message = RetrieveMessage(value); } } public string Message { get { return _message; } } private string RetrieveMessage(int value) { string message; switch (value) { case 100: message = "Request completed successfuly"; break; case 201: message = "Missing name in request."; break; default: message = "Unexpected failure, please email for support"; break; } return message; } }
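
    In case it helps, here is a minimal sketch of the class-plus-lookup idea discussed above; only the two messages from the question are filled in, and the remaining 44 rows would go in the same table:

      using System.Collections.Generic;

      public class ReturnCode
      {
          // Code -> message lookup; easier to maintain than a 46-case switch.
          private static readonly Dictionary<int, string> Messages = new Dictionary<int, string>
          {
              { 100, "Request completed successfully" },
              { 201, "Missing name in request." },
              // ... the remaining codes ...
          };

          public int Code { get; private set; }
          public string Message { get; private set; }

          public ReturnCode(int code)
          {
              Code = code;
              string message;
              Message = Messages.TryGetValue(code, out message)
                  ? message
                  : "Unexpected failure, please email for support";
          }
      }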

  • jquery select elements between two elements that are not siblings

    - by naugtur
    eg. [I've removed attributes, but it's a bit of auto-generated html] <img class="p"/> <div> hello world <p> <font><font size="2">text.<img class="p"/> some text</font></font> </p> <img class="p"/> <p> <font><font size="2">more text<img class="p"/> another piece of text </font></font> </p><img class="p"/> some text on the end </div> I need to apply some highlighting with backgrounds to all text that is between two closest [in the html code] img.p elements when hovering first of them. I have no idea how to do that. Lets say I hover the first img.p - it should highlight hello world and text. and nothing else. And now the worst part - I need the backgrounds to disappear on mouseleave. I need it to work with any HTML mess possible. The above is just an example and structure of the documents will differ. [tip. Processing the whole html before binding hover and putting some spans etc. is ok as long as it doesn't change the looks of the output document]

  • Remove accents from a JSON response using the raw content.

    - by Pentium10
    This is a follow up of this question: Remove accents from a JSON response. The accepted answer there works for a single item/string of a raw JSON content. But I would like to run a full transformation over the entire raw content of the JSON without parsing each object/array/item. What I've tried is this function removeAccents($jsoncontent) { $obj=json_decode($jsoncontent); // use decode to transform the unicode chars to utf $content=serialize($obj); // serialize into string, so the whole obj structure can be used string as a whole $a = 'ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûýýþÿRr'; $b = 'aaaaaaaceeeeiiiidnoooooouuuuybsaaaaaaaceeeeiiiidnoooooouuuyybyRr'; $content=utf8_decode($content); $jsoncontent = strtr($content, $a, $b); // at this point the accents are removed, and everything is good echo $jsoncontent; $obj=unserialize($jsoncontent); // this unserialization is returning false, probably because we messed up with the serialized string return json_encode($obj); } As you see after I decoded JSON content, I serialized the object to have a string of it, than I remove the accents from that string, but this way I have problem building back the object, as the unserialize stuff returns false. How can I fix this?
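
    A sketch of an alternative that avoids serialize()/unserialize() entirely (which fails here because strtr() changes the byte lengths recorded in the serialized string): walk the decoded structure recursively, transliterate only the string values, then re-encode. It assumes every accented character you care about appears in $a and that $a/$b are in the same single-byte encoding utf8_decode() produces:

      <?php
      function removeAccents($jsoncontent)
      {
          // same transliteration tables as in the question
          $a = 'ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûýýþÿRr';
          $b = 'aaaaaaaceeeeiiiidnoooooouuuuybsaaaaaaaceeeeiiiidnoooooouuuyybyRr';

          $strip = function ($value) use (&$strip, $a, $b) {
              if (is_string($value)) {
                  return strtr(utf8_decode($value), $a, $b);
              }
              if (is_array($value)) {
                  return array_map($strip, $value);
              }
              if (is_object($value)) {
                  foreach ($value as $key => $item) {
                      $value->$key = $strip($item);
                  }
              }
              return $value;
          };

          return json_encode($strip(json_decode($jsoncontent)));
      }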

  • PHP - Counting matching arrays in array

    - by Sergio
    hi there, I have an array structure that looks like this: Array ( [0] => Array ( [type] => image [data] => Array ( [id] => 1 [alias] => test [caption] => no caption [width] => 200 [height] => 200 ) ) [1] => Array ( [type] => image [data] => Array ( [id] => 2 [alias] => test2 [caption] => hello there [width] => 150 [height] => 150 ) ) ) My question is, how can I get a count of the number of embedded arrays that have their type set as image (or anything else for that matter)? In practise this value can vary. So, the above array would give me an answer of 2. Thanks
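
    For reference, a minimal sketch of the counting part, assuming the structure shown above:

      <?php
      // Count the nested entries whose 'type' key matches the requested value.
      function countByType(array $items, $type)
      {
          return count(array_filter($items, function ($item) use ($type) {
              return isset($item['type']) && $item['type'] === $type;
          }));
      }

      // countByType($array, 'image') returns 2 for the structure above.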

  • Temperature anomaly calculation of time series data

    - by neel
    I have a time series like following: Data <- structure(list(Year = c(1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1991L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L, 1992L), Month = c(8L, 9L, 9L, 9L, 10L, 10L, 10L, 11L, 11L, 11L, 12L, 12L, 12L, 1L, 1L, 1L, 2L, 2L, 2L, 3L, 3L, 3L, 4L, 4L, 4L, 5L, 5L, 5L, 6L, 6L, 6L, 7L, 7L, 7L, 8L, 8L, 8L, 9L, 9L, 9L, 10L, 10L, 10L, 11L, 11L, 11L, 12L, 12L, 12L), Day = c(30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 28L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L, 10L, 20L, 30L), Hour = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L), temperature = c(72.5, 64, 62.5, 64, 64, 53, 52, 52, 45.5, 49, 50, 50, 59, 63.5, 69.5, 61, 61, NaN, NaN, 39.5, 37, 45.5, 45, 39, 43.5, 52, 53, 56, 64, 66, 66.5, 73.5, 81, 85, 89.5, 87.5, 88.5, 83, 84.5, 74, 60.5, 59, 53, 60.5, 62.5, 64.5, 63, 62, 65.5)), .Names = c("Year", "Month", "Day", "Hour", "temperature"), class = "data.frame", row.names = c(NA, -49L)) and I have to calculate standardized anomaly. The steps to calculate the anomalies are following: Monthly premature departures from the long-term (1991-2007) average are obtained. Then standardized by dividing by the standard deviation of monthly temperature. The standardized monthly anomalies are then weighted by multiplying by the fraction of the average temperature for the given month. These weighted anomalies are then summed over 3 month time period. Can you please help me?
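
    A sketch of the first two steps in R, assuming Data is the data frame above (the weighting by the monthly temperature fraction and the 3-month sums would follow the same pattern):

      # Long-term monthly means and standard deviations, then standardized anomalies.
      clim <- aggregate(temperature ~ Month, data = Data, FUN = mean)
      sds  <- aggregate(temperature ~ Month, data = Data, FUN = sd)
      names(clim)[2] <- "mon.mean"
      names(sds)[2]  <- "mon.sd"

      # merge() may reorder rows; re-sort by Year/Month/Day afterwards if order matters.
      Data <- merge(merge(Data, clim, by = "Month"), sds, by = "Month")
      Data$std.anomaly <- (Data$temperature - Data$mon.mean) / Data$mon.sd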

  • How do I scrape information off ASP.NET websites when paging and JavaScript links are being used?

    - by Ian Roke
    I have been given a staff list which is supposed to be up to date, but it doesn't match an intranet People Finder written in ASP.NET. As the information is sensitive I am not able to access the database the People Finder uses, so the only way I can get at the information is by scraping the structure, starting with the top brass and then working through each tier in turn. Each person has a Staff number which forms the URL http://intranet/peoplefinder/index.aspx?srn=ABC1234, and all the people who report to them are listed underneath in the format <a id="gvEmployees_ctl03_lnkFullName" href="index.aspx?srn=ABC4321" target="_self">, where each URL indicates the Staff number and provides a link to their team. The trouble arises when the teams are big, as paging is implemented in the GridView with a URL such as <a href="javascript:__doPostBack('gvEmployees','Page$2')">2</a>. How would I scrape this page, capture the SRN and other details along with the people who report to the person on all pages of the GridView, then loop through each reportee and repeat the process until the whole list is complete?
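
    A rough sketch of the postback-replay approach (the class and method names here are mine, not from the site): fetch the page once with a normal GET, then re-post the WebForms hidden fields with the GridView paging arguments to get page 2, 3, and so on. An HTML parser is still needed to pull the SRNs out of each response:

      using System.Collections.Generic;
      using System.Net.Http;
      using System.Text.RegularExpressions;
      using System.Threading.Tasks;

      static class PeopleFinderScraper
      {
          static readonly HttpClient client = new HttpClient();

          // First page:  string html = await client.GetStringAsync(url);
          // Later pages: html = await GetNextPageAsync(url, html, 2); and so on,
          // always passing the HTML of the page you already have so its hidden fields are reused.
          public static async Task<string> GetNextPageAsync(string url, string currentHtml, int page)
          {
              var form = new Dictionary<string, string>
              {
                  ["__EVENTTARGET"] = "gvEmployees",
                  ["__EVENTARGUMENT"] = "Page$" + page,
                  ["__VIEWSTATE"] = Hidden(currentHtml, "__VIEWSTATE"),
                  ["__EVENTVALIDATION"] = Hidden(currentHtml, "__EVENTVALIDATION"),
              };
              HttpResponseMessage response = await client.PostAsync(url, new FormUrlEncodedContent(form));
              return await response.Content.ReadAsStringAsync();
          }

          // Pull a hidden field's value out of the rendered page.
          static string Hidden(string html, string name) =>
              Regex.Match(html, "id=\"" + name + "\" value=\"([^\"]*)\"").Groups[1].Value;
      }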

  • How to Create Element in XSLT When Copying Using Templates

    - by John Dumont
    Hello, I'm trying to create an element in an XML where the basic content is copied and modified. My XML is something like <root> <node> <child>value</child> <child2>value2</child2> </node> <node2>bla</node2> </root> The number of children of node may change as well as the children of root. The XSLT should copy the whole content, modify some values and add some new. The copying and modifying is no problem: <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0"> <xsl:output method="xml" encoding="UTF-8"/> <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> </xsl:template> </xsl:stylesheet> (+ further templates for modifications). But how do I add a new element in this structure on some path, e.g. I want to add an element as the last element of the "node" node. The "node" element itself always exists.
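
    One way to do that last part, as a sketch: add a template that overrides the identity copy for "node" and appends the extra element after the copied children (the element name here is only an example):

      <xsl:template match="node">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
          <newElement>new value</newElement>
        </xsl:copy>
      </xsl:template>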

  • Setting up a multi-site CMS, collecting thoughts about the DB schema

    - by Ben Fransen
    Hello all, I'm collecting some thoughts about creating a multisite CMS. In my opinion there are two major approaches: all data is stored in one database, giving me the advantage of a single point of updates; or separate databases, so each client has its own database, giving me the advantage of being able to measure bandwidth. Option 1 has the disadvantage of making bandwidth hard to measure, while option 2 loses the single point of updates. Are there any generic approaches for creating a sort of update system, so my clients can download a small package (maybe a zip with a conf file telling the update script where to put all the files and how to extend the database)? Do you guys have some thoughts about the best solution for a situation like this? I have my own webserver, full access to all resources, and I'm developing in PHP with MySQL as the DBMS. I hope to hear from you and I surely appreciate any effort you make to help me further! Greets from Holland, Ben Fransen

  • Need help choosing database server

    - by The Pretender
    Good day everyone. Recently I was given a task to develop an application to automate some aspects of stocks trading. While working on initial architecture, the database dilemma emerged. What I need is a fast database engine which can process huge amounts of data coming in very fast. I'm fairly experienced in general programming, but I never faced a task of developing a high-load database architecture. I developed a simple MSSQL database schema with several many-to-many relationships during one of my projects, but that's it. What I'm looking for is some advice on choosing the most suitable database engine and some pointers to various manuals or books which describe high-load database development. Specifics of the project are as follows: OS: Windows NT family (Server 2008 / 7) Primary platform: .NET with C# Database structure: one table to hold primary items and two or three tables with foreign keys to the first table to hold additional information. Database SELECT requirements: Need super-fast selection by foreign keys and by combination of foreign key and one of the columns (presumably DATETIME) Database INSERT requirements: The faster the better :) If there'll be significant performance gain, some parts can be written in C++ with managed interfaces to the rest of the system. So once again: given all that stuff I just typed, please give me some advice on what the best database for my project is. Links or references to some manuals and books on the subject are also greatly appreciated. EDIT: I'll need to insert 3-5 rows in 2 tables approximately once in 30-50 milliseconds and I'll need to do SELECT with 0-2 WHERE clauses queries with similar rate.

  • svn import, don't modify revision OR modify the list of files in a transaction

    - by Vaughan Durno
    Hi, I've gained so much knowledge/insight from this site in the past few years; now I'm actually hoping to get some enlightenment. The scenario is as follows: you have the general structure of the repo (trunk, branches, tags), but added to the layout you have another directory called 'db_revs'. In the pre-commit hook, you take a dump of a specific database (the specifics are irrelevant) into a temporary file, say /tmp/REV.sql (REV being the HEAD revision number of the repo, or the transaction). All is well, and you can just import that temp file into the repo at /db_revs/REV.sql. Now obviously that import, even though it happens during a commit, increments the revision of the repo. So when you commit, say, 'test.php' in the trunk and it completes at revision 159, the pre-commit runs as it should and the DB dump gets imported, but then you are sitting with a tree in the repo-browser where 'trunk' is at revision 159 and 'db_revs', which has the imported dump, is at 158 (I've made it so that the filename matches the revision, i.e. 159.sql, but that file is then at revision 158). NB: if you're doing an import in a pre-commit hook, you need to add some logic to skip the import, say by checking first for the existence of the temp file, otherwise it will cause, um, a stack overflow and your PC will quickly crawl to a standstill. So I wanted to know if it is possible to make an import not commit its changes. I realise I might be barking up the wrong tree to begin with, so I have another idea of doing this, which brings me to the second part of my question: would it be possible to modify the list of files that the transaction is about to commit to the repo? I know this can be done to a WC, but that won't help, as a WC is a checked-out copy of, say, the trunk, so I'm not sure how you would add a file to the 'db_revs' folder which is above trunk. Any help is greatly appreciated. Cheers, Vaughan

  • Forms Auth: have different credentials for a subdirectory?

    - by Fyodor Soikin
    My website has forms authentication, and all is well. Now I want to create a subdirectory and have it also password-protected, but! I need the subdirectory to use a completely different set of logins/passwords than the whole website uses. Say, for example, I have users for the website stored in the "Users" table in a database. But for the subdirectory, I want the users to be taken from the "SubdirUsers" table. Which probably has a completely different structure. Consequently, I need the logins to be completely parallel, as in: Logging into the whole website does not make you logged into the subdirectory as well Clicking "logout" on the whole website does not nullify your login in the subdirectory And vice versa I do not want to create a separate virtual application for the subdirectory, because I want to share all libraries, user controls, as well as application state and cache. In other words, it has to be the same application. I also do not want to just add a flag to the "Users" table indicating whether this is a whole website user or the subdirectory user. User lists have to come from different sources. For now, the only option that I see is to roll my own Forms Auth for the subdirectory. Anybody can propose a better alternative?

  • Batch Geocoding with Garmin Mapsource

    - by Mike Trader
    Please Note: I AM NOT LOOKING FOR AN ALTERNATIVE SOLUTION I lost track of this effort years ago but have need to geocode thousands of addresses nightly. I must use the very accurate database sitting on the machine, installed when the Nuvi map update installed Mapsource. When I contacted Garmin years ago, they expressed an interest in providing an API for this, but then I heard nothing and did not follow up. Their database is provided by navtec? I believe. Anyone have experience with that format? I posted on the Garmin Developer forum a while ago, but its a little lethargic over there :) Has anyone done this? Does anyone know how it might be done without an API; meaning database structure and calls? I'll take a solution in any language. Added: Garmin has expressed an interest in making this available to me. They just have not done it. I do not know the database format. I am NOT looking for an online solution or any other "alternative". This question is very specific. Contact Info: MikeTrader2 A T gmail D O T com Added: I offered a 400 pt bounty for this. Jeff Atwood then offered 400pts also. If you would like to see a solution to this, vote up the question and I will chase up Garmin and show there is interest in finally providing this. Please Note: I AM NOT LOOKING FOR AN ALTERNATIVE SOLUTION

  • Don't fire onfocus when selecting text?

    - by Casey Hope
    I'm writing a JavaScript chatting application, but I'm running into a minor problem. Here is the HTML structure: <div id="chat"> <div id="messages"></div> <textarea></textarea> </div> When the user clicks/focuses on the chat box, I want the textbox to be automatically focused. I have this onfocus handler on the chat box: chat.onfocus = function () { textarea.focus(); } This works, but the problem is that in Firefox, this makes it impossible to select text in the messages div, since when you try to click on it, the focus shifts to the textarea. How can I avoid this problem? (Semi-related issues: In Chrome, textarea.focus() doesn't seem to shift the keyboard focus to the textarea; it only highlights the box. IE8 does not seem to respond to the onfocus at all when clicking, even if it tabindex is set. Any idea why?)
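
    A sketch of one workaround, assuming chat and textarea are the elements above: react to click instead of focus, and skip the jump when the user has just selected text in the messages div:

      chat.onclick = function (event) {
          var selection = window.getSelection ? window.getSelection().toString() : '';
          // Only move focus when nothing was selected and the click wasn't already in the textarea.
          if (selection === '' && event.target !== textarea) {
              textarea.focus();
          }
      };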

  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files ( large here being potentially upwards of a gigabyte ) in C# including performing some complex xpath queries. The problem I have is that the standard way I would normally do this through the System.XML libraries likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to be updating the files at all just reading them and querying the data contained in them. Some of the XPath queries are quite involved and go across several levels of parent-child type relationship - I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block. One way I can see of making it work is to perform the simple analysis using a stream-based approach and perhaps wrapping the XPath statements into XSLT transformations that I could run across the files afterward, although it seems a little convoluted. Alternately I know that there are some elements that the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on it's original tree structure, which could perhaps be small enough to process in memory without causing too much havoc. I've tried to explain my objective here so if I'm barking up totally the wrong tree in terms of general approach I'm sure you folks can set me right...
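
    A sketch of the streaming-plus-fragments idea, assuming the XPath queries can be scoped to a known repeating element: walk the file with XmlReader so only one fragment is in memory at a time, and run XPath against each fragment:

      using System;
      using System.Xml;
      using System.Xml.XPath;

      static class FragmentQuery
      {
          public static void Run(string path, string elementName, string xpath)
          {
              using (XmlReader reader = XmlReader.Create(path))
              {
                  // Stream through the document, stopping at each occurrence of elementName.
                  while (reader.ReadToFollowing(elementName))
                  {
                      using (XmlReader subtree = reader.ReadSubtree())
                      {
                          // Only this fragment is loaded for querying.
                          XPathNavigator nav = new XPathDocument(subtree).CreateNavigator();
                          foreach (XPathNavigator hit in nav.Select(xpath))
                          {
                              Console.WriteLine(hit.Value);
                          }
                      }
                  }
              }
          }
      }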

  • Documentation for java.util.concurrent.locks.ReentrantReadWriteLock

    - by Andrei Taptunov
    Disclaimer: I'm not very good at Java and just comparing read/writer locks between C# and Java to understand this topic better & decisions behind both implementations. There is JavaDoc about ReentrantReadWriteLock. It states the following about upgrade/downgrade for locks: Lock downgrading ... However, upgrading from a read lock to the write lock is not possible. It also has the following example that shows manual upgrade from read lock to write lock: // Here is a code sketch showing how to exploit reentrancy // to perform lock downgrading after updating a cache void processCachedData() { rwl.readLock().lock(); if (!cacheValid) { // upgrade lock manually #1: rwl.readLock().unlock(); // must unlock first to obtain writelock #2: rwl.writeLock().lock(); if (!cacheValid) { // recheck ... } ... } use(data); rwl.readLock().unlock(); Does it mean that actually the sample from above may not behave correctly in some cases - I mean there is no lock between lines #1 & #2 and underlying structure is exposed to changes from other threads. So it can not be considered as the correct way to upgrade the lock or do I miss something here?
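
    For comparison, here is a fuller version of that JavaDoc sketch (loadData and use are placeholders for the real work); the gap between #1 and #2 is tolerated precisely because the state is rechecked once the write lock is held, and the write lock is then downgraded to a read lock before release:

      import java.util.concurrent.locks.ReentrantReadWriteLock;

      class CachedData {
          Object data;
          volatile boolean cacheValid;
          final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

          void processCachedData() {
              rwl.readLock().lock();
              if (!cacheValid) {
                  // #1: must release the read lock before acquiring the write lock
                  rwl.readLock().unlock();
                  // #2: another thread may get the write lock first, so...
                  rwl.writeLock().lock();
                  try {
                      // ...recheck the state once the write lock is held
                      if (!cacheValid) {
                          data = loadData();   // placeholder for the real refresh
                          cacheValid = true;
                      }
                      // Downgrade: take the read lock before giving up the write lock.
                      rwl.readLock().lock();
                  } finally {
                      rwl.writeLock().unlock();
                  }
              }
              try {
                  use(data);                   // placeholder for the read-only work
              } finally {
                  rwl.readLock().unlock();
              }
          }

          Object loadData() { return new Object(); }
          void use(Object data) { }
      }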

  • Entity Framework self referencing entity deletion.

    - by Viktor
    Hello. I have a structure of folders like this: Folder1 Folder1.1 Folder1.2 Folder2 Folder2.1 Folder2.1.1 and so on.. The question is how to cascade delete them(i.e. when remove folder2 all children are also deleted). I can't set an ON DELETE action because MSSQL does not allow it. Can you give some suggesions? UPDATE: I wrote this stored proc, can I just leave it or it needs some modifications? SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE PROCEDURE sp_DeleteFoldersRecursive @parent_folder_id int AS BEGIN SET NOCOUNT ON; IF @parent_folder_id = 0 RETURN; CREATE TABLE #temp(fid INT ); DECLARE @Count INT; INSERT INTO #temp(fid) SELECT FolderId FROM Folders WHERE FolderId = @parent_folder_id; SET @Count = @@ROWCOUNT; WHILE @Count > 0 BEGIN INSERT INTO #temp(fid) SELECT FolderId FROM Folders WHERE EXISTS (SELECT FolderId FROM #temp WHERE Folders.ParentId = #temp.fid) AND NOT EXISTS (SELECT FolderId FROM #temp WHERE Folders.FolderId = #temp.fid); SET @Count = @@ROWCOUNT; END DELETE Folders FROM Folders INNER JOIN #temp ON Folders.FolderId = #temp.fid; DROP TABLE #temp; END GO
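
    If the loop-based version works it can probably stay; for comparison, a sketch of an alternative using a recursive CTE (same Folders(FolderId, ParentId) schema assumed):

      -- Collect the folder and all of its descendants, then delete them in one statement.
      ;WITH descendants AS (
          SELECT FolderId FROM Folders WHERE FolderId = @parent_folder_id
          UNION ALL
          SELECT f.FolderId
          FROM Folders f
          INNER JOIN descendants d ON f.ParentId = d.FolderId
      )
      DELETE f
      FROM Folders f
      INNER JOIN descendants d ON f.FolderId = d.FolderId;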

  • Hide list on blur

    - by bah
    Hi, I have a list that shows up when I click on a button. The problem is that I want to hide the list whenever it loses focus, i.e. the user clicks anywhere on the page but not on that list. How can I do this? Also, you can see how it should look at last.fm, where there is a list of profile settings (li.profileItem). (ul.del li ul is display:none by default.) Basic list structure: <ul class='del'> <li> <button class='deletePage'></button> <ul> <li></li> <li></li> </ul> </li> </ul> My function for displaying this list: $(".deletePage").click(function(){ if( $(this).siblings(0).css("display") == "block" ){ $(this).siblings(0).hide(); } else { $(this).siblings(0).show(); } });
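
    A sketch of the usual "click outside to close" approach, assuming the markup above: a document-level click handler hides the list, and clicks that should keep it open stop propagating before they reach it:

      $(".deletePage").click(function (event) {
          event.stopPropagation();            // keep this click from reaching the document handler
          $(this).siblings("ul").toggle();
      });

      $(".del li ul").click(function (event) {
          event.stopPropagation();            // clicks inside the open list keep it open
      });

      $(document).click(function () {
          $(".del li ul").hide();             // any other click closes it
      });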

  • Please help me with database connection types in Windows.

    - by Hamish Grubijan
    Sorry for a badly-phrased question. I have a requirement coming from a non-technical person that I need to make sense of. I am basically told: "Here use 'Driver={SQL Server};Server=SERVERNAME\INSTANCENAME;Database=DATABASENAME;Uid=UNAME;Pwd=PASSWORD;' and here use 'Server=SERVERNAME\INSTANCENAME;Database=DATABASENAME;Uid=UNAME;Pwd=PASSWORD;'". I am getting no additional help here. While this seems to miraculously fix a bug, I want to understand what it is that I am changing, so I can leave a proper comment for the next developer, and so I can structure the code and name variables differently depending on the meaning of the change. The work revolves around ASP.NET, reporting, and SQL Server 2008. Please give me some examples of when you would use one connection string over the other. Feel free to edit the question if you can see a way to improve it. When can you drop the 'Driver={SQL Server};' part? Thank you. EDIT: SQL Server 2008 is the target database, but others can be used ... or maybe will be used in the future.
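
    For what it's worth, the usual reading is that the string with 'Driver={SQL Server};' is an ODBC connection string while the one without it is the native SqlClient format, so which one to use depends on the provider opening the connection. A small sketch with the placeholder credentials from the question:

      using System.Data.Odbc;
      using System.Data.SqlClient;

      static class ConnectionExamples
      {
          // ODBC-style string (the 'Driver={SQL Server};...' form) goes with the ODBC provider:
          public static OdbcConnection Odbc() => new OdbcConnection(
              "Driver={SQL Server};Server=SERVERNAME\\INSTANCENAME;Database=DATABASENAME;Uid=UNAME;Pwd=PASSWORD;");

          // The form without 'Driver=' is the native SqlClient format:
          public static SqlConnection Native() => new SqlConnection(
              "Server=SERVERNAME\\INSTANCENAME;Database=DATABASENAME;Uid=UNAME;Pwd=PASSWORD;");
      }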

  • jQuery .load() and sub-pages

    - by user354051
    Hi, I am quite new to JavaScript and am developing a simple site to get hands-on with web programming. The site is simple and its structure is like this: a simple ul/li/css based navigation menu; sub-pages are loaded into a "div" using jQuery whenever the user clicks the appropriate menu item. Some of the sub-pages use different jQuery based plugins such as LightWindow, jTip etc. The jQuery function that loads the sub-page is like this: function loadContent(htmlfile){ jQuery("#content").load(htmlfile); }; The menu items fire the loadContent method like this: <li><a href="javascript:void(0)" onclick="loadContent('overview.html');return false">overview</a></li> This loads a sub-page named "overview.html" inside the "div". That's it. Now this is working fine, but some of the sub-pages using jQuery based plugins are not working well when loaded inside the "div". If you load them individually in the browser they work fine. Based on the above I have a few questions: Most of the plugins are based on jQuery and sub-pages are loaded inside "index.html" using the "loadContent" function. Do I have to include <script type="text/javascript" src="js/jquery-1.4.2.min.js"></script> on each and every sub-page? If a page uses a custom jQuery plugin, then where do I call it? In "index.html" or on the page where I am using it? I think whatever script you include in "index.html", you don't have to include again in any of the sub-pages. Am I right here? Thanks, Prashant
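
    A sketch of the usual pattern, assuming the same loadContent(): including jQuery and the plugin scripts once in index.html is enough, but plugin behaviour bound on the initial page load does not attach to HTML inserted later, so the plugin setup has to be re-run in the load() callback. initPlugins below is a hypothetical helper standing in for whatever calls each plugin's own init function on the new content:

      function loadContent(htmlfile) {
          jQuery("#content").load(htmlfile, function () {
              // re-bind plugins (LightWindow, jTip, ...) to the freshly inserted elements only
              initPlugins(jQuery("#content")); // hypothetical helper
          });
      }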

  • Is there a PHP library that performs MySQL Data Validation and Sanitization According to Column Type

    - by JW
    Do you know of any open source library or framework that can perform some basic validation and escaping functionality for a MySQL Db. i envisage something along the lines of: //give it something to perform the quote() quoteInto() methods $lib->setSanitizor($MyZend_DBAdaptor); //tell it structure of the table - colnames/coltypes/ etc $lib->setTableDescription($tableDescArray); //use it to validate and escape according to coltype foreach ($prospectiveData as $colName => $rawValue) if ( $lib->isValid($colName, $rawValue)) { //add it to the set clause $setValuesArray[$lib->escapeIdentifier($colName);] = $lib->getEscapedValue($colName,$rawValue); } else { throw new Exception($colName->getErrorMessage()); } etc... I have looked into - Zend_Db_Table (which knows about a table's description), and - Zend_Db_Adaptor (which knows how to escape/sanitize values depending on TYPE) but they do not automatically do any clever stuff during updates/inserts Anyone know of a good PHP library to preform this kind of validation that I could use rather than writing my own? i envisage alot of this kind of stuff: ... elseif (eregi('^INT|^INTEGER',$dataset_element_arr[col_type])) { $datatype='int'; if (eregi('unsigned',$dataset_element_arr[col_type])) { $int_max_val=4294967296; $int_min_val=0; } else { $int_max_val=2147483647; $int_min_val=-2147483648; } } (p.s I know eregi is deprecated - its just an example of laborious code)

  • Parsing/Tokenizing a String Containing a SQL Command

    - by Alan Storm
    Are there any open source libraries (any language, python/PHP preferred) that will tokenize/parse an ANSI SQL string into its various components? That is, if I had the following string SELECT a.foo, b.baz, a.bar FROM TABLE_A a LEFT JOIN TABLE_B b ON a.id = b.id WHERE baz = 'snafu'; I'd get back a data structure/object something like //fake PHPish $results['select-columns'] = Array[a.foo,b.baz,a.bar]; $results['tables'] = Array[TABLE_A,TABLE_B]; $results['table-aliases'] = Array[a=TABLE_A, b=TABLE_B]; //etc... Restated, I'm looking for the code in a database package that teases the SQL command apart so that the engine knows what to do with it. Searching the internet turns up a lot of results on how to parse a string WITH SQL. That's not what I want. I realize I could glop through an open source database's code to find what I want, but I was hoping for something a little more ready made, (although if you know where in the MySQL, PostgreSQL, SQLite source to look, feel free to pass it along) Thanks!
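
    If Python is acceptable, the open source sqlparse package is one starting point; a minimal sketch of tokenizing the example statement (mapping the tokens into the exact structure above is still manual work):

      import sqlparse

      sql = ("SELECT a.foo, b.baz, a.bar FROM TABLE_A a "
             "LEFT JOIN TABLE_B b ON a.id = b.id WHERE baz = 'snafu';")

      # parse() returns one Statement per statement; each exposes a token tree.
      statement = sqlparse.parse(sql)[0]
      for token in statement.tokens:
          print(token.ttype, repr(token.value))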

  • Configure WebLogic MDB to listen to a Foreign AMQ Server

    - by eliel.lobo
    I'm trying to create an MDB (EJB 3.0) on WebLogic 10.3.5 to listen to a Queue in an external AMQ server, but after much work and a combination of tutorials I get the following error when deploying on WebLogic: [EJB:015027]The Message-Driven EJB is transactional but JMS connection factory referenced by the JNDI name: ActiveMQXAConnectionFactory is not a JMS XA connection factory. Here is a brief summary of the work I have done: I have added the corresponding libraries to my WLS classpath (following this tutorial: http://amadei.com.br/blog/index.php/connecting-weblogic-and-activemq) and I have created the corresponding JMS Modules as indicated in the tutorial. As connection factory I used ActiveMQConnectionFactory initially and ActiveMQXAConnectionFactory later; I also ignored the jms. notation and just put plain names such as testQueue. Then I created a simple MDB with the following structure. I explicitly defined the "connectionFactoryJndiName" property because otherwise it assumes a WebLogic connection factory which is not found and then raises an error. @MessageDriven( activationConfig = { @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"), @ActivationConfigProperty(propertyName = "destination", propertyValue = "testQueue"), @ActivationConfigProperty(propertyName = "connectionFactoryJndiName", propertyValue = "ActiveMQXAConnectionFactory") }, mappedName = "testQueue") public class ROMELReceiver implements MessageListener { /** * Default constructor. */ public ROMELReceiver() { // TODO Auto-generated constructor stub } /** * @see MessageListener#onMessage(Message) */ public void onMessage(Message message) { System.out.println("Message received"); } } At this point I'm stuck with the error mentioned above. Even though I use ActiveMQXAConnectionFactory instead of simply ActiveMQConnectionFactory, the JNDI resources tree in WebLogic Server shows org.apache.activemq.ActiveMQConnectionFactory as the class for my configured connection factory. Am I missing something, or is this just a completely wrong way to connect WebLogic with AMQ? Thanks in advance.

  • Functional way to get a matrix from text

    - by Elazar Leibovich
    I'm trying to solve some Google Code Jam problems, where an input matrix is typically given in this form: 2 3 #matrix dimensions 1 2 3 4 5 6 7 8 9 # all 3 elements in the first row 2 3 4 5 6 7 8 9 0 # each element is composed of three integers where each element of the matrix is composed of, say, three integers. So this example should be converted to #!scala Array( Array(A(1,2,3),A(4,5,6),A(7,8,9), Array(A(2,3,4),A(5,6,7),A(8,9,0), ) An imperative solution would be of the form #!python input = """2 3 1 2 3 4 5 6 7 8 9 2 3 4 5 6 7 8 9 0 """ lines = input.split('\n') print lines[0] m,n = (int(x) for x in lines[0].split()) array = [] row = [] A = [] for line in lines[1:]: for elt in line.split(): A.append(elt) if len(A)== 3: row.append(A) A = [] array.append(row) row = [] from pprint import pprint pprint(array) A functional solution I've thought of is #!scala def splitList[A](l:List[A],i:Int):List[List[A]] = { if (l.isEmpty) return List[List[A]]() val (head,tail) = l.splitAt(i) return head :: splitList(tail,i) } def readMatrix(src:Iterator[String]):Array[Array[TrafficLight]] = { val Array(x,y) = src.next.split(" +").map(_.trim.toInt) val mat = src.take(x).toList.map(_.split(" "). map(_.trim.toInt)). map(a => splitList(a.toList,3). map(b => TrafficLight(b(0),b(1),b(2)) ).toArray ).toArray return mat } But I really feel it's the wrong way to go because: I'm using the functional List structure for each line, and then convert it to an array. The whole code seems much less efficeint I find it longer less elegant and much less readable than the python solution. It is harder to which of the map functions operates on what, as they all use the same semantics. What is the right functional way to do that?
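
    For comparison, a shorter sketch of the same readMatrix, assuming the same TrafficLight(a, b, c) case class; grouped(3) does the job of the hand-written splitList:

      def readMatrix(src: Iterator[String]): Array[Array[TrafficLight]] = {
        // first line: number of rows (the column count is not needed here)
        val Array(rows, _) = src.next().trim.split("\\s+").map(_.toInt)
        src.take(rows).map { line =>
          line.trim.split("\\s+").map(_.toInt)
            .grouped(3)                                      // three ints per element
            .map { case Array(a, b, c) => TrafficLight(a, b, c) }
            .toArray
        }.toArray
      }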

  • Using hg repository as web site

    - by Tex
    This is somewhat related to my security question here. Is it a bad idea to use an hg / mercurial repository for a live website? If so, why? Furthermore, we have dev, test and production installations of our website, like dev.example.com, test.example.com and www.example.com. If it's a bad idea to use a repository for a live/production website, would it be OK to use an hg repository for the dev and test sites? I'm also concerned about ease of deployment. We have technical and less technical co-workers who will be working with the site. The technical guys (software engineers) won't have any problem working with the command line or TortoiseHG. I'm more concerned about the less technical guys (web designers). They won't be comfortable working on the command line, and may even find TortoiseHG daunting. These guys mostly upload .css files and images to the server. I'd like for these files (at least the .css files) to be under version control, but I want this to be as transparent as possible for the non technical guys. What's the best way to achieve this? Edit: Our 'site' is actually a multi-site CMS setup with a main repository and several subrepositories. Mock-up of the repository structure: /root [main repository containing core files and subrepositories] /modules [modules subrepository] /sites/global [subrepository for global .css and .php files] /sites/site1 [site1 subrepository] ... /sites/siteN [siteN subrepository] Software engineers would work in the root, modules and sites/global repositories. Less technical guys (web designers) would work only in the site1 ... siteN subrepositories.

  • What's the best way to develop a debugging window for an ajax ASP.Net MVC application

    - by KallDrexx
    While developing my ASP.NET MVC, I have started to see the need for a debugging console window to assist in figuring out what is going right and wrong in my code. I read the last few chapters of the Pro Asp.net MVC book, and the author details how to use http modules to show page load/creation times and linq to sql query logs, both of which I definitely want to be able to see. However, since I am loading a lot of small sections of my page individually with ajax I don't want the debug information right there in the middle of my screen. So the idea I came up with was to have a separate browser window (open-able by a link or some javascript) with a console log, that can contain logged entries both from javascript and from the asp.net mvc run. The former should be relatively easy, but I'm having trouble coming up with a way to log the asp.net information in ajax requests. The direction I have been thinking of going is to create an httpmodule (like the Pro MVC book does), and have that module contain some that append the javascript's log to console calls with the messages. The issue I see with this is finding a way to get the log messages from the controller's action methods to the httpmodule's methods. The only way I see to do this is with a singleton, but I'm not sure if singletons are bad practice for a stateless web application. Furthermore, it seems like if I return json with my ajax calls (instead of pure html) then that won't work at all anyways and unless there is a way to add data to an existing json structure inside the httpmodule. How does everyone else handle this type of debugging in heavily ajax applications? For reference, the javascript library I am using is jquery.
