Search Results

Search found 36 results on 2 pages for 'julia'.

Page 1/2 | 1 2  | Next Page >

  • Entity Framework 4.0: My Favorite Books

    - by nannette
    I'm in the process of reading several Entity Framework 4.0 books. I'm going to recommend two such books: 1) Programming Entity Framework: Building Data Centric Apps with the ADO.NET Entity Framework by Julia Lerman 2) Entity Framework 4.0 Recipes: A Problem-Solution Approach by Larry Tenny and Zeeshan Hirani Visit these Entity Framework 4.0 Quick Start videos by Julia Lerman. The book links include numerous detailed reviews. If you can only afford one of the books, I'd recommend Julia's book as the...(read more)

    Read the article

  • Java Audit table logging, MySQL equivalent of CONTEXT_INFO.

    - by Julia
    Hi, I am looking for the MySQL equivalent of CONTEXT_INFO, which is available in SQL Server, or any other session-variable-like mechanism I can use to pass the username to a trigger. I am currently working on logging table data for audit. I need to pass the username of the logged-in user to the delete trigger. Any ideas? We are deleting the rows from the table in a few cases and marking them as deleted in others. Alternate solutions are welcome. I thought of using AOP, but it could prove problematic with cascading deletes. I want to look into Hibernate Interceptors, though I am not sure at this point whether that works. If I can find the MySQL equivalent of CONTEXT_INFO, my job is done, and elegantly so. Thanks, Julia.
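
    One commonly suggested substitute in MySQL is a user-defined session variable (@variable), set by the application on the same connection just before the DELETE runs; triggers fired on that connection can read it. A minimal JDBC sketch follows; the table, column, and variable names are illustrative, not from the question:

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class AuditedDelete {
        // Sketch only: @app_user is a MySQL user-defined session variable,
        // visible to triggers fired on this same connection. "members" and
        // its columns are hypothetical names.
        public static void deleteWithAudit(Connection conn, String appUser, int rowId) throws Exception {
            try (PreparedStatement set = conn.prepareStatement("SET @app_user = ?")) {
                set.setString(1, appUser);
                set.execute();
            }
            try (PreparedStatement del = conn.prepareStatement("DELETE FROM members WHERE id = ?")) {
                del.setInt(1, rowId);
                del.executeUpdate();
            }
            // A trigger can then read the variable, e.g. (illustrative SQL):
            //   CREATE TRIGGER members_audit BEFORE DELETE ON members
            //   FOR EACH ROW INSERT INTO members_audit_log (member_id, deleted_by)
            //   VALUES (OLD.id, @app_user);
        }
    }

    The catch is that the variable is per-connection, so with a connection pool it must be set on every checkout (or before every audited statement), not once per user session.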

    Read the article

  • Multi language site - use of canonical link and link rel="alternate"

    - by julia
    I keep reading everywhere that if you have a multi-language site, where the same page appears in, say, French and English, then this is considered duplicate content by Google. It is written that using a canonical link is the solution, but I do not understand how to use it in this case. Should I: Choose either the French URL or the English URL to be the canonical (main) one, and place the canonical link there? If so, how do I decide which of the two URLs must be canonical? Both languages are important to me, and I want the content under both languages to be indexed by Google and served to the user, depending on the language in which he searches. OR should I place a canonical link on both the French and English URLs? If so, then I do not understand the point of the canonical link; in this case, would both URLs be indexed, and would both be considered "important" by Google rather than duplicates? Also, I read that link rel="alternate" can be used to indicate to Google that, for example, the French URL is the French-language equivalent of the English page. This makes sense and I understand how to use such links, but how are they combined with canonical links? Should I define both the canonical URL AND specify rel="alternate" on both URLs? Could someone help me clarify this? I'm stuck and can't seem to find a good enough explanation in the various sources.
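
    For reference, the pattern Google documents for this case is a self-referencing canonical on each language version plus rel="alternate" hreflang links between the versions, so neither language is folded into the other and both stay indexed. A minimal sketch with placeholder URLs:

    <!-- On the English page (http://example.com/en/page); URLs are placeholders -->
    <link rel="canonical" href="http://example.com/en/page" />
    <link rel="alternate" hreflang="en" href="http://example.com/en/page" />
    <link rel="alternate" hreflang="fr" href="http://example.com/fr/page" />

    <!-- On the French page, the canonical points to itself, not to the English version -->
    <link rel="canonical" href="http://example.com/fr/page" />
    <link rel="alternate" hreflang="en" href="http://example.com/en/page" />
    <link rel="alternate" hreflang="fr" href="http://example.com/fr/page" />

    In other words, canonical is only for collapsing true duplicates of the same-language page; cross-language equivalence is expressed entirely through the hreflang annotations.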

    Read the article

  • Facebook and Gmail stop working after 10 minutes

    - by Julia
    I have a problem with Facebook and Gmail only: each works fine and lets me log in, view photos and videos, read new messages, etc., but after 5-10 minutes it doesn't load at all: "This webpage is not available. The webpage at http://www.facebook.com/ might be temporarily down or it may have moved permanently to a new web address. ... Error 101 (net::ERR_CONNECTION_RESET): Unknown error." After deleting cookies this problem disappears for 5-10 minutes, then I get the same error. It happens in both Google Chrome and Firefox. Ping works fine. I have checked System-Preferences-Network Proxy; it is set to the default, "Direct Internet Connection". Then I ran the tests at chrome://net-internals/#tests and got some FAIL results: Use system proxy settings; Disable IPv6 host resolving; Probe for IPv6 host resolving. IPv6 is disabled.

    Read the article

  • Linq find differences in two lists

    - by Salo
    I have two lists of members like this: Before: Peter, Ken, Julia, Tom. After: Peter, Robert, Julia, Tom. As you can see, Ken is out and Robert is in. What I want is to detect the changes: a list of what has changed between the two lists. How can LINQ help me?
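
    A minimal sketch of one common approach, using Enumerable.Except; the list contents come from the question, everything else is illustrative:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Program
    {
        static void Main()
        {
            var before = new List<string> { "Peter", "Ken", "Julia", "Tom" };
            var after  = new List<string> { "Peter", "Robert", "Julia", "Tom" };

            // Members present before but missing after (removed), and vice versa (added).
            var removed = before.Except(after).ToList();  // [ "Ken" ]
            var added   = after.Except(before).ToList();  // [ "Robert" ]

            Console.WriteLine("Removed: " + string.Join(", ", removed));
            Console.WriteLine("Added: " + string.Join(", ", added));
        }
    }

    Except uses the default equality comparer, so for a custom member type you would pass an IEqualityComparer<T> (or override Equals/GetHashCode) rather than compare raw strings.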

    Read the article

  • YUM error. Is this a cert error?

    - by Julia Roberts
    Nov 13 13:38:57 host abrt: detected unhandled Python exception in '/usr/bin/yum'
    Nov 13 13:38:57 host abrtd: New client connected
    Nov 13 13:38:57 host abrt-server[3508]: Saved Python crash dump of pid 3151 to /var/spool/abrt/pyhook-2012-11-13-13:38:57-3151
    Nov 13 13:38:57 host abrtd: Directory 'pyhook-2012-11-13-13:38:57-3151' creation detected
    Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta
    Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-former
    Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-release
    Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-rhx
    Nov 13 13:38:57 host abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    Nov 13 13:38:57 host abrtd: Package 'yum' isn't signed with proper key
    Nov 13 13:38:57 host abrtd: 'post-create' on '/var/spool/abrt/pyhook-2012-11-13-13:38:57-3151' exited with 1
    Nov 13 13:38:57 host abrtd: Corrupted or bad directory /var/spool/abrt/pyhook-2012-11-13-13:38:57-3151, deleting

    There is also nothing in the crash dump file. Ideas?

    yum update
    Loaded plugins: fastestmirror, rhnplugin, security
    An error has occurred: Internal Server Error
    See /var/log/up2date for more information

    Is yum broken?

    Read the article

  • IIS 7.0 - responses throttled to 500ms blocks?

    - by Julia Hayward
    Scenario: an ASP.NET MVC web app sitting on my local machine (Vista Ultimate, IIS 7.0), nothing going on except one user (me) logged in and viewing an index page. The page includes 9 dynamic images drawn from the underlying DB and returned from a controller action. I have got the actual processing time for these images down to 15ms each. Turn on Firebug and watch the page load. What I see is 9 requests for images firing off together – no surprise – but four come back to me almost immediately; two more after 0.5s; another after 1s; then at 1.5s and 2s. Logging on the server side suggests the individual responses are still only taking 15ms. So it appears IIS is queueing things up into 500ms chunks. (Repeating the experiment produces different results, but each time the images return in similar blocks – you might get three in the first group, then three at 0.5s, two at 1s, etc. – and it's always at 500ms intervals, never anything else.) It's also repeatable cross-browser, and it's not repeatable with other forms of content. I haven't found any particular mention of this problem out there, so I'm sort of assuming it's not an IIS bug. So is it: i) something IIS on desktop OSs does deliberately, to make you use server OSs in production? ii) some magical setting that has eluded me for as long as I've known IIS? iii) something peculiar to MVC or SQL Server 2008? Or something else?

    Read the article

  • Free memory on linux [closed]

    - by Julia Roberts
    Possible Duplicate: Meaning of the buffers/cache line in the output of free

    What would be a good setting to free memory on Linux? I have 8 GB, but it gets used up quickly. Current settings:

    kernel.sched_min_granularity_ns = 10000000
    kernel.sched_wakeup_granularity_ns = 15000000
    vm.dirty_ratio = 40
    kernel.pid_max = 4096
    vm.bdflush = 100 1200 128 512 15 5000 500 1884 2

    What settings would I need so that Linux frees old RAM faster?

    Read the article

  • Questions and considerations to ask client for designing a database

    - by Julia
    Hi guys! As the title says, I would like to hear your advice on the most important questions to consider and ask end users before designing a database for their application. We are to build a database-oriented app, with special attention paid to DB security (access control, encryption, integrity, backups). The database will also keep some personal information about people, which is considered sensitive by law, so security must be good. I have worked on school projects with databases, but this is my first time working "in the real world", where DB security has real implications. I found some advice and questions to ask on the internet, but here I always get the best ones. All help appreciated! Thank you!

    Read the article

  • create TableModel and populate jTable dynamically

    - by Julia
    Hi all! I want to store the results of reading a Lucene index into a JTable, so that I can make it sortable by different columns. From the index I am reading terms with different measures of their frequencies. The table columns are: [String term][int absFrequency][int docFrequency][double invFrequency]. In an AbstractTableModel I can define the column names, but I don't know how to get the Object[][] data with the results from the following method:

    public static void FrequencyMap(Directory indexDir) throws Exception {
        List<Object[]> redoviLista = new ArrayList<Object[]>();
        //final Map<String,TermRow> map = new TreeMap<String,TermRow>();
        List<String> termList = new ArrayList<String>();
        IndexReader iReader = IndexReader.open(indexDir);
        FilterIndexReader fReader = new FilterIndexReader(iReader);
        int numOfDocs = fReader.numDocs();
        TermEnum terms = fReader.terms();
        while (terms.next()) {
            Term term = terms.term();
            String termText = term.text();
            termList.add(termText);
            // Calculating the frequencies
            int df = iReader.docFreq(term);
            double idf = 0.0F;
            idf = Math.log10((double) numOfDocs / df);
            double tfidf = (df * idf);
            // Here comes the important part
            Object oneTableRow[] = {termText, df, idf, tfidf};
            redoviLista.add(oneTableRow); // store the rows in a list, to copy into the table later
        }
        iReader.close();
        // So I need something like this, and I need this array to be accessible outside this method
        Object[][] data = new Object[redoviLista.size()][];
        for (int i = 0; i < data.length; i++) {
            data[i] = redoviLista.get(i);
        }
    }

    So I am kind of stuck here on how to implement the AbstractTableModel and populate and display this table. Please help!
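
    One way to close that gap (a sketch, not the poster's code): have FrequencyMap return the row list instead of keeping it local, and wrap the list in a small AbstractTableModel. The column names follow the question; everything else is illustrative:

    import java.util.List;
    import javax.swing.JTable;
    import javax.swing.table.AbstractTableModel;

    // Minimal sketch: wraps the List<Object[]> built inside FrequencyMap,
    // where each element is one {term, df, idf, tfidf} row.
    public class TermTableModel extends AbstractTableModel {
        private final String[] columns = {"term", "absFrequency", "docFrequency", "invFrequency"};
        private final List<Object[]> rows;

        public TermTableModel(List<Object[]> rows) {
            this.rows = rows;
        }

        public int getRowCount() { return rows.size(); }
        public int getColumnCount() { return columns.length; }
        public String getColumnName(int col) { return columns[col]; }
        public Object getValueAt(int row, int col) { return rows.get(row)[col]; }
    }

    // Usage sketch:
    //   JTable table = new JTable(new TermTableModel(rowList));
    //   table.setAutoCreateRowSorter(true);  // column-click sorting (Java 6+)

    With the model backed directly by the list, the Object[][] copy step disappears entirely, and setAutoCreateRowSorter gives the sortable columns the question asks for.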

    Read the article

  • What are the reasons to store documents into DBMS when using Alfresco CMS

    - by Julia
    Hello guys! I have an interview for an internship with a company that wants to implement a document management system. They are considering open-source solutions first, their top choice being Alfresco, but the decision is not final yet; part of my work there would be to investigate whether Alfresco is the best solution. From what I have seen in the project description, they would implement Alfresco with a MySQL database, and use the DBMS not just for document metadata and indexing: they actually want to store the documents inside it. Given the company profile, the documents would be mostly PDF and .doc, not images. I have researched a bit, and I have read all the topics here related to storing files in a database, so as not to duplicate a question. From what I understand, storing BLOBs is generally not recommended, and given the profile of the company and their legal obligations around archiving, I can see they will have to store a larger amount of documents. I would like to be as ready as I can for the interview, and that is why I would like your opinion on these questions: 1) What would be your reasons for deciding to store documents in the DBMS (especially bearing in mind that you are installing Alfresco, which stores files in the file system)? 2) Do you have any experience with storing documents in a MySQL database specifically? All help is very much appreciated; I am really excited about the interview and really want this internship, so this is one of the things I really want to understand beforehand! Thank you!

    Read the article

  • Get highest frequency terms from Lucene index

    - by Julia
    Hello! I need to extract the terms with the highest frequencies from several Lucene indexes, to use them for some semantic analysis. I want to get maybe the top 30 most frequently occurring terms (I still haven't decided on a threshold; I will analyze the results) and their per-index counts. I am aware that I might lose some precision because of potentially dropped duplicates, but for now let's say I am OK with that. Speed is not important for the proposed solutions (needless to say, maybe), since I would do static analysis; I would put the accent on simplicity of implementation, because I am not so skilled with Lucene (not a programming guru either) and can't wrap my mind around many of its concepts. I cannot find any code samples for anything similar, so all concrete advice (code, pseudocode, links to code samples...) will be much appreciated! Thank you!
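
    A sketch of one simple approach, written against the same pre-4.0 Lucene API used in the related questions on this page (IndexReader, TermEnum): walk the term dictionary and keep the N highest-ranked terms in a bounded min-heap. It ranks by document frequency; counting total occurrences instead would require summing per-document freq() values via TermDocs:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.PriorityQueue;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.index.TermEnum;
    import org.apache.lucene.store.FSDirectory;

    public class TopTerms {
        // Sketch: returns the topN terms by document frequency from one index.
        public static List<String> topTerms(File indexDir, int topN) throws Exception {
            IndexReader reader = IndexReader.open(FSDirectory.open(indexDir));
            // Min-heap ordered by frequency: once it holds topN entries,
            // the lowest-frequency term is evicted first.
            PriorityQueue<Object[]> heap = new PriorityQueue<Object[]>(topN,
                    new Comparator<Object[]>() {
                        public int compare(Object[] a, Object[] b) {
                            return ((Integer) a[1]).compareTo((Integer) b[1]);
                        }
                    });
            TermEnum terms = reader.terms();
            while (terms.next()) {
                Term term = terms.term();
                heap.add(new Object[] { term.text(), reader.docFreq(term) });
                if (heap.size() > topN) {
                    heap.poll(); // drop the current lowest-frequency term
                }
            }
            terms.close();
            reader.close();
            List<String> result = new ArrayList<String>();
            while (!heap.isEmpty()) {
                Object[] e = heap.poll();
                result.add(0, e[0] + " (" + e[1] + ")"); // highest frequency first
            }
            return result;
        }
    }

    Running this once per index and keeping the per-index lists side by side gives the per-index counts the question asks for.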

    Read the article

  • Exception when indexing text documents with Lucene, using SnowballAnalyzer for cleaning up

    - by Julia
    Hello! I am indexing documents with Lucene and am trying to apply the SnowballAnalyzer for punctuation and stopword removal from the text, but I keep getting the following error:

    IllegalAccessError: tried to access method org.apache.lucene.analysis.Tokenizer.<init>(Ljava/io/Reader;)V from class org.apache.lucene.analysis.snowball.SnowballAnalyzer

    Here is the code; I would very much appreciate help! I am new to this.

    public class Indexer {
        private Indexer() {}
        private String[] stopWords = {....};
        private String indexName;
        private IndexWriter iWriter;
        private static String FILES_TO_INDEX = "/Users/ssi/forindexing";

        public static void main(String[] args) throws Exception {
            Indexer m = new Indexer();
            m.index("./newindex");
        }

        public void index(String indexName) throws Exception {
            this.indexName = indexName;
            final File docDir = new File(FILES_TO_INDEX);
            if (!docDir.exists() || !docDir.canRead()) {
                System.err.println("Something wrong... " + docDir.getPath());
                System.exit(1);
            }
            Date start = new Date();
            PerFieldAnalyzerWrapper analyzers = new PerFieldAnalyzerWrapper(new SimpleAnalyzer());
            analyzers.addAnalyzer("text", new SnowballAnalyzer("English", stopWords));
            Directory directory = FSDirectory.open(new File(this.indexName));
            IndexWriter.MaxFieldLength maxLength = IndexWriter.MaxFieldLength.UNLIMITED;
            iWriter = new IndexWriter(directory, analyzers, true, maxLength);
            System.out.println("Indexing to dir..........." + indexName);
            if (docDir.isDirectory()) {
                File[] files = docDir.listFiles();
                if (files != null) {
                    for (int i = 0; i < files.length; i++) {
                        try {
                            indexDocument(files[i]);
                        } catch (FileNotFoundException fnfe) {
                            fnfe.printStackTrace();
                        }
                    }
                }
            }
            System.out.println("Optimizing...... ");
            iWriter.optimize();
            iWriter.close();
            Date end = new Date();
            System.out.println("Time to index was " + (end.getTime() - start.getTime()) + " milliseconds");
        }

        private void indexDocument(File someDoc) throws IOException {
            Document doc = new Document();
            Field name = new Field("name", someDoc.getName(), Field.Store.YES, Field.Index.ANALYZED);
            Field text = new Field("text", new FileReader(someDoc), Field.TermVector.WITH_POSITIONS_OFFSETS);
            doc.add(name);
            doc.add(text);
            iWriter.addDocument(doc);
        }
    }

    Read the article

  • Need help implementing this algorithm with Hadoop MapReduce

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files, and searches for specific terms in their lines. I have it implemented in Java; I didn't want to post the code so it doesn't look like I am searching for someone to implement it for me, but I truly do need a lot of help! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it this way. EDIT (I did not clarify this in the previous version): the data set is on a Hadoop cluster, and I should produce a MapReduce implementation of it. I read about MapReduce and thought that I would first do the standard implementation and then it would be more or less easy to convert it to MapReduce. But that didn't happen, since the algorithm is quite dumb and nothing special, while MapReduce... I can't wrap my mind around it. So here, briefly, is the pseudocode of my algorithm:

    LIST termList (there is a method that creates this list from a Lucene index)
    FOLDER topFolder

    INPUT topFolder
    IF it is a folder and not empty
        list files (there are 30 subfolders inside)
        FOR EACH subfolder
            GET file "CheckedFile.txt"
            analyze(CheckedFile)
        ENDFOR
    END IF

    Method ANALYZE(CheckedFile)
        read CheckedFile
        WHILE CheckedFile has next line
            GET line
            FOR (loop through termList)
                GET third word from line
                IF third word = term from list
                    append whole line to string buffer
                ENDIF
            ENDFOR
        END WHILE
        OUTPUT string buffer to file

    Also, as you can see, each time "analyze" is called a new file has to be created, and I understood that writing to many outputs is difficult in MapReduce(?). I understand the MapReduce intuition, and my example seems perfectly suited for it, but when it comes to actually doing it, I obviously do not know enough, and I am STUCK! Please help.
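
    Not the poster's code, but a sketch of how the per-line check could map onto Hadoop's new-API Mapper, with MultipleOutputs covering the one-output-file-per-subfolder requirement. How the term list is distributed (e.g. via the DistributedCache), the driver setup, and the output naming are all assumptions:

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

    // Sketch only: assumes the Lucene-derived term list is available on each
    // node and loaded in setup(); that loading code is omitted.
    public class TermLineMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
        private final Set<String> terms = new HashSet<String>();
        private MultipleOutputs<NullWritable, Text> out;
        private String folderName;

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            out = new MultipleOutputs<NullWritable, Text>(context);
            // Name the output after the subfolder the current split came from,
            // mirroring "one result file per CheckedFile.txt" in the pseudocode.
            FileSplit split = (FileSplit) context.getInputSplit();
            folderName = split.getPath().getParent().getName();
            // terms.addAll(...)  // populate from the term list here
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] words = value.toString().split("\\s+");
            // "GET third word from line; IF third word = term from list, keep line"
            if (words.length >= 3 && terms.contains(words[2])) {
                out.write(NullWritable.get(), value, folderName);
            }
        }

        @Override
        protected void cleanup(Context context) throws IOException, InterruptedException {
            out.close();
        }
    }

    Since each matching line is emitted independently and no cross-line state is needed, a reducer can be skipped entirely (a map-only job), which fits the "quite dumb and nothing special" description well.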

    Read the article

  • Load test results VSTS 2008

    - by Julia
    Can anybody explain why the load test results on the Graph page differ from the load test results on the Table page? If I compare Min and Avg page response times, they are the same, but if I compare the figures in the Max column, they are different for the same page and the same load test run. See the linked image for more details: http://cid-ee8b34c203174724.skydrive.live.com/self.aspx/.Public/3.png

    Read the article

  • "no inclosing instance error " while getting top term frequencies for document from Lucene index

    - by Julia
    Hello! I am trying to get the most frequently occurring terms for each particular document in a Lucene index. I am trying to set a threshold for the number of top occurring terms that I care about, maybe 20. However, I am getting "no enclosing instance of type DisplayTermVectors is accessible" when calling the Comparator. To this function I pass the term vector of each document and the maximum number of top terms I would like to know:

    protected static Collection getTopTerms(TermFreqVector tfv, int maxTerms) {
        String[] terms = tfv.getTerms();
        int[] tFreqs = tfv.getTermFrequencies();
        List result = new ArrayList(terms.length);
        for (int i = 0; i < tFreqs.length; i++) {
            TermFrq tf = new TermFrq(terms[i], tFreqs[i]);
            result.add(tf);
        }
        Collections.sort(result, new FreqComparator());
        if (maxTerms < result.size()) {
            result = result.subList(0, maxTerms);
        }
        return result;
    }

    /* Class for objects to hold the term/freq pairs */
    static class TermFrq {
        private String term;
        private int freq;

        public TermFrq(String term, int freq) {
            this.term = term;
            this.freq = freq;
        }
        public String getTerm() { return this.term; }
        public int getFreq() { return this.freq; }
    }

    /* Comparator to compare the objects by frequency */
    class FreqComparator implements Comparator {
        public int compare(Object pair1, Object pair2) {
            int f1 = ((TermFrq) pair1).getFreq();
            int f2 = ((TermFrq) pair2).getFreq();
            if (f1 > f2) return 1;
            else if (f1 < f2) return -1;
            else return 0;
        }
    }

    Explanations and corrections I will very much appreciate, and if someone has experience with term frequency extraction and did it a better way, I am open to all suggestions! Please help! Thanks!
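
    For what it's worth, that particular error usually means FreqComparator is a non-static inner class being instantiated from a static context (the static getTopTerms has no enclosing DisplayTermVectors instance to hand it). The usual fix is to declare the comparator static, like the TermFrq class above it already is. A sketch of the corrected shape, as a fragment nested inside DisplayTermVectors, with generics added:

    // Inside DisplayTermVectors: a static nested class needs no outer
    // instance, so "new FreqComparator()" works from static methods.
    static class FreqComparator implements java.util.Comparator<TermFrq> {
        public int compare(TermFrq a, TermFrq b) {
            // Sort descending by frequency, so subList(0, maxTerms) keeps the
            // MOST frequent terms; note the original ascending comparator
            // would make subList keep the least frequent ones instead.
            return b.getFreq() - a.getFreq();
        }
    }

    The sort direction is worth double-checking against the intent here: with the original ascending order, the top-20 cut would have returned the rarest terms, not the most frequent.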

    Read the article

  • PDF printing in java

    - by Julia
    Hello, is there a way to print PDF files from a Java web application on the end user's local printer (connected via VPN)? A simple lookup via the Java Print Service always returns printers that are not able to print PDFs. Are there other libraries that can be used for printing in Java? By the way, just opening the PDF in the browser is not an option, since it must be possible to run scheduled batch printing without user interaction. Thanks in advance
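
    One library commonly used for this is Apache PDFBox, which rasterizes the pages itself, so the job is not limited to printers that advertise native PDF support. A sketch assuming PDFBox 2.x; note the code must run on a machine that can actually see the target printer (the VPN-connected client side, not the web server), and the printer name is a placeholder:

    import java.awt.print.PrinterJob;
    import java.io.File;
    import javax.print.PrintService;
    import javax.print.PrintServiceLookup;
    import org.apache.pdfbox.pdmodel.PDDocument;
    import org.apache.pdfbox.printing.PDFPageable;

    public class PdfPrinter {
        // Sketch, assuming Apache PDFBox 2.x on the classpath. No dialog is
        // shown, which suits scheduled batch printing without user interaction.
        public static void print(File pdfFile, String printerName) throws Exception {
            PDDocument document = PDDocument.load(pdfFile);
            try {
                PrinterJob job = PrinterJob.getPrinterJob();
                // Pick a specific queue by name instead of the default printer.
                for (PrintService service : PrintServiceLookup.lookupPrintServices(null, null)) {
                    if (service.getName().equals(printerName)) {
                        job.setPrintService(service);
                        break;
                    }
                }
                job.setPageable(new PDFPageable(document));
                job.print(); // prints immediately, no dialog
            } finally {
                document.close();
            }
        }
    }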

    Read the article

  • Human Resources Best Practices: Cross Company Mentoring

    - by Fabian Gradolph
    One of the positive things about working in a large organization like Oracle is the chance to take part in far-reaching initiatives that are normally not available at many companies. Yesterday, together with American Express and Coca-Cola, the third edition of the Cross Company Mentoring program was presented, an initiative in which the three companies collaborate by providing mentors (experienced professionals) to promote the professional development of talented individuals across the three companies. The originality of the program lies in the fact that the mentors contribute to the development of professionals from the other participating companies, not only their own. The opening presentation was given by Alfredo García-Valverde, president of American Express in Spain. Afterwards, Julia B. López of American Express and Rosa María Arias of Oracle (in that order in the photo) explained what the initiative consists of and took stock of the previous edition. Although this program, complementary to those already running at the three companies, is open to both men and women, it should be noted that a good part of its rationale is to strengthen the role of talented professional women in the companies. Generally speaking, all large organizations face a similar problem in developing female talent: regardless of the number of women on a company's payroll, their number drops drastically when it comes to management positions. Breaking that "glass ceiling" is a priority for companies, both for reasons of simple social justice and to make the most of the talent that already exists within the organizations, preventing female talent from being "lost" for want of adequate opportunities for its development. The Cross Company Mentoring initiative has well-defined objectives. The first is to develop talent with an innovative method that makes it possible to learn about best practices at other companies and to take advantage of external talent. In addition, as Julia López pointed out, it is a method that forces us out of the comfort zone of the traditionally accepted practices within each organization, which are rarely questioned. The second objective is for the mentee, the program's main beneficiary, to learn from the experience of professionals with distinguished careers in order to develop their own solutions to the challenges of their professional career. The program now being presented, the third edition, will start next month and run until the end of the year. It will surely be as successful as the two previous editions.

    Read the article

  • Google+ Platform Office Hours: Mobile

    This week the Google+ Platform Office Hours went mobile. Join Julia and Chirag as they show Jenny three ways to share to Google+ from Android.
    1:21 - Session agenda
    2:20 - Sharing text and an image with the share intent
    5:25 - Share with the Google+ mobile application
    7:25 - Take and share a photo with the built-in camera
    12:08 - A question about the various Google messaging services on Android - Send feedback - goo.gl
    13:05 - When does Google Play Services come out?
    From: GoogleDevelopers Views: 1630 29 ratings Time: 14:57 More in Science & Technology

    Read the article

  • SPARC Power Management Article at OTN

    - by nospam(at)example.com (Joerg Moellenkamp)
    My colleague Karoly Vegh pointed in a tweet to a really interesting article about the use of the power management features of SPARC T-series systems. The article explains how to use power management, how it works, what it is able to do, and how to use it dynamically according to anticipated load patterns. You can find the article "How to Use the Power Management Controls on SPARC Servers", written by Bruce Evans, Julia Harper, and Terry Whatley, on OTN.

    Read the article

1 2  | Next Page >