Search Results

Search found 8800 results on 352 pages for 'import'.

  • Problem with a Python import error

    - by xiao
    Hello, I have written a small Python module with one class and two functions. The skeleton of the module is as follows:

        # file name: test_module.py
        class TestClass:
            @classmethod
            def method1(cls, param1):
                # to do something
                pass

            def __init__(self, param1):
                # to do something
                ...

        def fun1(*params):
            # to do something
            ...

        def fun2(*params):
            # to do something
            ...

    Another .py file is a small script that imports the function and class from the module:

        import sys
        from test_module import TestClass, fun1, fun2

        def main(sys_argv):
            li = range(5)
            inst1 = TestClass(li)
            fun1(inst1)
            fun2(inst1)
            return

        if __name__ == "__main__":
            main(sys.argv)

    But when I execute the script, it fails with the following message:

        ./script.py: line 4: syntax error near unexpected token `('
        ./script.py: line 4: `def main(sys_argv):'

    I am not sure what the problem is. Is it a problem with the import? When I try to import the module in IPython, everything works fine.
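
    Worth noting: those error lines are in bash's error format, not a Python traceback, which suggests the shell is executing the script itself rather than handing it to Python. A likely cause (an assumption, not confirmed in the post) is a missing shebang line. A minimal sketch of the fix:

        #!/usr/bin/env python
        # script.py -- the shebang above makes "./script.py" run under Python
        import sys
        from test_module import TestClass, fun1, fun2

        def main(sys_argv):
            li = range(5)
            inst1 = TestClass(li)
            fun1(inst1)
            fun2(inst1)

        if __name__ == "__main__":
            main(sys.argv)

    Running it explicitly as python script.py would sidestep the shebang question entirely.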

    Read the article

  • Import a python module without the .py extension

    - by compie
    I have a file called foobar (without .py extension). In the same directory I have another python file that tries to import it: import foobar But this only works if I rename the file to foobar.py. Is it possible to import a python module that doesn't have the .py extension?
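
    One way to do this (a sketch using the Python 3 standard-library import machinery; on the Python 2 of the question's era, imp.load_source("foobar", "foobar") plays the same role):

        import importlib.machinery
        import importlib.util

        # load ./foobar (no extension) as a module named "foobar"
        loader = importlib.machinery.SourceFileLoader("foobar", "./foobar")
        spec = importlib.util.spec_from_loader("foobar", loader)
        foobar = importlib.util.module_from_spec(spec)
        loader.exec_module(foobar)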

    Read the article

  • Pyjamas import statements

    - by Gordon Worley
    I'm starting to use Pyjamas and I'm running into some annoyances. I have to import a lot of stuff to make a script work well. For example, to make a button I need to first from pyjamas.ui.Button import Button and then I can use Button. Note that import pyjamas.ui.Button and then using Button.Button doesn't work (results in errors when you build to JavaScript, at least in 0.7pre1). Does anyone have a better example of a good way to do the import statements in Pyjamas than what the Pyjamas folks have on their site? Doing things their way is possible, but ugly and overly complicated from my perspective, especially when you want to use a dozen or more ui components.
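
    One pattern that can at least tidy up call sites (a sketch; untested against the 0.7pre1 build tool, which may restrict which import forms it can translate) is to gather the verbose one-class-per-module imports into a single local module and import everything from there:

        # widgets.py -- collect the Pyjamas ui imports in one place
        from pyjamas.ui.Button import Button
        from pyjamas.ui.Label import Label
        from pyjamas.ui.VerticalPanel import VerticalPanel

        # in the application script:
        # from widgets import Button, Label, VerticalPanel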

    Read the article

  • Eclipse: Double semi-colon on an import

    - by smp7d
    Using Eclipse, if I have an extra semicolon on an import line (not the last import line), I see a syntax error in the IDE. However, this compiles fine outside of the IDE (with Maven, in this case). Example:

        import java.util.ArrayList;; // notice the extra semicolon
        import java.util.List;

    Does anyone else see this behavior? Why is this showing as a syntax error? I am working with someone who keeps pushing these to source control and it is irritating me (they clearly aren't using Eclipse). Full disclosure: I am using SpringSource Tool Suite 2.8.0.

    Read the article

  • Circular import in Django

    - by Kuhtraphalji
    For example, I have two apps, alpha and beta. In alpha/models.py I import a model from beta.models, and in beta/models.py I import a model from alpha.models. manage.py validate says: ImportError: cannot import name ModelName. How do I solve this problem?
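
    A common way out (a sketch; the model and field names here are made up) is to drop one of the module-level imports and refer to the other app's model by its "app.Model" string label, which Django resolves lazily:

        # alpha/models.py
        from django.db import models

        class AlphaItem(models.Model):
            # string reference: no "from beta.models import BetaItem" needed, so
            # alpha.models and beta.models no longer import each other at load time
            # (on_delete is only required on newer Django versions)
            beta_item = models.ForeignKey("beta.BetaItem", on_delete=models.CASCADE)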

    Read the article

  • How to import a module from PyPI when I have another module with the same name

    - by kuzzooroo
    I'm trying to use the lockfile module from PyPI. I do my development within Spyder. After installing the module from PyPI, I can't import it by doing import lockfile. I end up importing anaconda/lib/python2.7/site-packages/spyderlib/utils/external/lockfile.py instead. Spyder seems to want to have the spyderlib/utils/external directory at the beginning of sys.path, or at least none of the polite ways I can find to add my other paths get me in front of spyderlib/utils/external. I'm using Python 2.7, but with from __future__ import absolute_import. Here's what I've already tried:

    - Writing code that modifies sys.path before running import lockfile. This works, but it can't be the correct way of doing things.
    - Circumventing the normal mechanics of importing in Python using the imp module. I haven't gotten this to work yet, but I'm guessing it could be made to work.
    - Installing the package with something like pip install --install-option="--prefix=modules_with_name_collisions" package_name. I haven't gotten this to work yet either, but I'm guessing it could be made to work. It looks like this option is intended to create an entirely separate lib tree, which is more than I need.
    - Using pip install --target=lockfile_from_pip. The files show up in the directory where I tell them to go, but import doesn't find them. And in fact pip uninstall can't find them either: I get Cannot uninstall requirement lockfile-from-pip, not installed, and I guess I will just delete the directories and hope that's clean.

    So what's the preferred way for me to get access to the PyPI lockfile module?
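
    Given the Python 2.7 setting, one targeted workaround (a sketch; the site-packages path is a placeholder, and it assumes the PyPI distribution installs as a single lockfile.py -- adjust to lockfile/__init__.py if it installs as a package) is to load the PyPI copy by explicit path, so sys.path ordering never enters into it:

        import imp

        # bind the name "lockfile" to the PyPI copy, bypassing Spyder's bundled one
        lockfile = imp.load_source(
            "lockfile",
            "/path/to/anaconda/lib/python2.7/site-packages/lockfile.py",
        )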

    Read the article

  • PHP: import large table into phpMyAdmin database

    - by safaali
    Hi, I'm really worried: I accidentally dropped one of the tables from the database. Fortunately, I have a backup (made with "Auto backup for MySQL"). The backup of the table is stored as a .txt file (56 megabytes) on my PC. I tried to import it through phpMyAdmin, and the import failed because the file is too large. I then uploaded the file to the /home/tablebk directory. I have some experience in PHP. I know that I could import it with code like the following, but I don't know the SQL statement for this import: what do I have to put in the $line variable? Please help me.

        <?php
        $dbhost = 'localhost';
        $dbuser = 'mysite';
        $dbpw = 'password';
        $dbname = 'databasename';

        mysql_connect($dbhost, $dbuser, $dbpw); // connection was missing in the original snippet
        mysql_select_db($dbname);

        $file = @fopen('country.txt', 'r');
        if ($file) {
            while (!feof($file)) {
                $line = trim(fgets($file));
                if ($line === '') continue;    // skip blank lines in the dump
                $flag = mysql_query($line);
                if ($flag) {                   // isset($flag) was always true here
                    echo 'Insert Successfully<br />';
                } else {
                    echo mysql_error() . '<br/>';
                }
                flush();
            }
            fclose($file);
        }
        echo '<br />End of File';
        ?>

    Read the article

  • Win32 C++ Import path based on OS?

    - by Zenox
    I'm working with some legacy code that has an import like so:

        #import "C:\Program Files\Common Files\System\ado\msado15.dll" rename("EOF", "EndOfFile")

    The problem is, on an x64 machine the path for this import is in the 'Program Files (x86)' directory. Is there a preprocessor macro I can wrap around this to make it work on either?

    Edit: I think I found it: _M_X64. But I'm not 100% sure this is correct.

    Read the article

  • How do I import Amazon MP3s with Banshee and the new Amazon Cloud Player?

    - by adempewolff
    Banshee's Amazon MP3 Import extension until recently allowed seamless importing of songs purchased from Amazon MP3. It did this by a) opening .amz files and using them to connect to and download the purchased files from Amazon's servers, and b) using hooks in Banshee's built-in browser to automatically recognize and open the .amz files when clicked on in the browser. However, this functionality recently stopped working. Banshee displays "Contacting Server" in the lower left-hand corner for a little while and then stops. Furthermore, opening the Amazon Cloud Player in the Banshee browser, or in any other browser on a Linux system, to manually download the .amz file now results in the message:

        On Linux systems, Cloud Player only supports downloading songs one at a time. To download your music, deselect all checkboxes, select the checkbox for the song you want to download, then click the "Download" button.

    How can I get around this and import my purchased music into Banshee as I used to?

    Read the article

  • How to import a PDF into LibreOffice? Under Ubuntu, all pages are blank

    - by Daniele
    I have some .pdf files, generated by a scanner, that I want to import into LibreOffice for some small edits. Each PDF has only one object per page: a page-sized image. If I open such a file in LibreOffice under Ubuntu 12.10, it imports "successfully" but all pages are blank. I have the libreoffice-pdfimport package installed. This is true with both LibreOffice 3.6 (part of Ubuntu 12.10) and 4.0.2 from the libreoffice PPA. The same .pdf files open perfectly fine in both LibreOffice for Windows and LibreOffice for Mac (yes, I have three computers with all three OSes), but on Ubuntu 12.10 all pages are blank, so I can only conclude that this is an issue with Ubuntu packaging, or that something really weird prevents it from working under Linux. How can I import these kinds of .pdf into LibreOffice for editing?

    Read the article

  • Set CSV import default to UTF-8 in Calc

    - by picca
    Every time I open a CSV (comma separated values) document in OpenOffice.org Calc I get a dialog with CSV preferences. The current default character set is "Eastern Europe (ISO-8859-2)". I would like "UTF-8" to be selected by default instead.

    Read the article

  • Import Firefox passwords into KeePassX or KeePass2

    - by rubo77
    I have an XML export of my Firefox passwords in the following form (real passwords replaced with ****):

        <xml>
        <entries ext="Password Exporter" extxmlversion="1.1" type="saved" encrypt="false">
        <entry host="chrome://weave" user="****" password="****" formSubmitURL="" httpRealm="Mozilla Services Password" userFieldName="" passFieldName=""/>
        <entry host="chrome://weave" user="****" password="****" formSubmitURL="" httpRealm="Mozilla Services Encryption Passphrase" userFieldName="" passFieldName=""/>
        <entry host="http://www.example.de" user="rubo77" password="****" formSubmitURL="http://www.example.de" httpRealm="" userFieldName="benutzername" passFieldName="passwort"/>
        <entry host="http://example2.de" user="qqq" password="pppp" formSubmitURL="http://example2.de" httpRealm="" userFieldName="username" passFieldName="pass"/>
        ...

    Can I somehow convert this into a form KeePassX understands?
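
    One route (a sketch, not a verified KeePassX workflow: it assumes your KeePassX or KeePass2 version offers a generic CSV import with title/username/password/URL columns, so check its import dialog first) is to flatten the Password Exporter XML to CSV with a few lines of Python:

        import csv
        import xml.etree.ElementTree as ET

        tree = ET.parse("firefox_passwords.xml")   # the Password Exporter dump
        with open("keepassx_import.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["Title", "Username", "Password", "URL"])
            for entry in tree.getroot().iter("entry"):
                writer.writerow([
                    entry.get("host", ""),
                    entry.get("user", ""),
                    entry.get("password", ""),
                    entry.get("formSubmitURL", "") or entry.get("host", ""),
                ])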

    Read the article

  • Excel CSV import treating quoted strings of numbers as numeric values, not strings

    - by MichaelOryl
    I've got a web application that is exporting its data to a CSV file. Here's one example row of the CSV file in question: 28,"65154",02/21/2013 00:00,"false","0316295","8316012,8315844","MALE" Since I can't post an image, I'll have to explain the results in Excel. The "0316295" field gets turned into a number and the leading 0 goes away. The "8316012,8315844" gets interpreted as one single number: 83,160,128,315,844. That is, most obviously, not the intended result. I've seen people recommend a leading single quote for such cases, but that doesn't really work either. 28,"65154",02/21/2013 00:00,"false","'0316295","'8316012,8315844","MALE" The single quote is visible at all times in the cell in Excel, though if I enter a number with a leading single quote myself, it shows just the intended string and not the single quote with the string. Importing is not the same as typing, it seems. Anybody have a solution here?
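
    One workaround many exporters fall back on (a sketch, illustrated in Python here since the exporting web application's language isn't stated) is to emit fragile fields as Excel text formulas, ="...", which Excel evaluates to the literal string instead of a number:

        import csv

        def excel_text(value):
            # ="0316295" survives Excel's CSV import as the text 0316295
            return '="{0}"'.format(value)

        row = [28, "65154", "02/21/2013 00:00", "false",
               excel_text("0316295"), excel_text("8316012,8315844"), "MALE"]
        with open("export.csv", "w", newline="") as f:
            csv.writer(f).writerow(row)

    The csv module takes care of escaping the embedded quotes and the comma; whether the trick is acceptable depends on what else has to consume the file.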

    Read the article

  • Excel 2007: save import steps for a CSV file?

    - by Chris Marisic
    I have a CSV file that constantly needs to be opened in Excel and have its data copied over to a separate workbook. I find the process of clicking through all of the dialogs, setting the text qualifier and setting the columns to all be text, extremely tedious. In many similar data operations in MSSQL or Access, the program will ask whether you wish to save these steps, but Excel doesn't readily offer that. Is there any way to get comparable behavior with Excel?
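
    The Excel-native answer is usually a recorded macro, but one way to sidestep the wizard entirely (a sketch in Python using the third-party openpyxl package; data.csv is a placeholder name) is to script the conversion so every cell arrives as text:

        import csv
        from openpyxl import Workbook

        wb = Workbook()
        ws = wb.active
        with open("data.csv", newline="") as f:
            for row in csv.reader(f):
                ws.append(row)             # csv.reader yields strings, so values stay text
        for row in ws.iter_rows():
            for cell in row:
                cell.number_format = "@"   # "@" is Excel's text format
        wb.save("data.xlsx")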

    Read the article

  • Import data in Excel that doesn't have a row delimiter, but number of columns is known

    - by Alex B
    So I have this text file that looks something like this:

        Header1 Header2 Header3 Header4 A1 B1 C1 D1 A2 B2 C2 D2

    and so on. When imported, I'd want the data to format itself into 4 columns. I tried Get External Data from Text, and it successfully imports the file, but it doesn't wrap the data around: it just keeps making columns for every space. I'd want it to go to the next line after 4 (in this case) elements have been added. What's the simplest way to achieve this?

    EDIT: My answer follows, since I'm not yet allowed to answer my own questions. The Excel function I needed is called INDIRECT(). Not sure how it actually works, though, so hopefully someone can help out with that, but the function call that worked for me is

        =INDIRECT(ADDRESS((ROW(A1)-1)*4+COLUMN(A1),1))

    which I found over here: http://www.ozgrid.com/forum/showthread.php?t=101584&p=456031#post456031 Note: this required me to add the text to Excel so that I'd get one row full of columns, and then flip it so that I'd have a column full of rows.
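
    If stepping outside Excel is an option, the reshaping is also easy to script (a sketch; flat.txt, reshaped.csv and the column count of 4 are stand-ins, and it assumes individual values never contain spaces):

        import csv

        # read the whitespace-separated tokens and regroup them four to a row
        with open("flat.txt") as f:
            tokens = f.read().split()

        cols = 4
        rows = [tokens[i:i + cols] for i in range(0, len(tokens), cols)]

        with open("reshaped.csv", "w", newline="") as out:
            csv.writer(out).writerows(rows)   # opens in Excel as 4 tidy columns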

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background

    B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities. This note highlights one aspect of the best practices for CPA import when a large number of CPAs, in excess of several hundred, must be maintained within the B2B repository.

    Symptoms

    The import of a CPA is usually a 2-step process: first create a soa.zip file using the b2bcpaimport utility based on a CPA properties file, then use b2bimport to import it into the B2B repository. The commands are provided below:

        ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true
        ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true

    Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners within the repository goes up, the second command can take up to ~30 seconds per operation. This adds up to a significant amount of time if hundreds of CPAs must be imported into a production system within a limited downtime or maintenance window.

    Remedy

    In situations where a large number of entries must be imported, it is best to set up a staging environment and run the import of each individual CPA against an empty repository. Since each import runs against an empty repository, it completes in reasonable time. After all the partner profiles have been imported, take a full repository export to capture the metadata for all the entries in one file. Importing this single file, with all the partner entries, into a loaded repository reduces the total import time dramatically.

    Results

    The numbers show the benefit of this approach. With a pre-loaded repository of ~400 partners, each individual import takes ~30 seconds, so importing another 100 partners one entry at a time takes ~50 minutes (100 × ~30 seconds). On the other hand, if a repository export file of the same 100 partners is prepared in a staging environment beforehand, the import takes about ~5 minutes. The total processing time for loading the metadata, especially in a production environment, can thus be shortened by almost a factor of 10.

    Summary

    The original article includes a diagram summarizing the entire approach and process.

    Acknowledgements

    The material posted here has been compiled with help from the B2B Engineering and Product Management teams.

    Read the article

  • Script/tool to import series of snapshots, each being a new edition, into GIT, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reached a significant point in development, e.g. a working build. So I have a longish list of about 40 folders, each folder being a snapshot, in ascending YYYYMMDD date order, e.g.:

        20100523 20100614 20100721 20100722 20100809 20100901 20101001 20101003 20101104 20101119 20101203 20101218 20110102

    I'm looking for a script to import each of these snapshots into Git (see the sketch below). The end result should be that the latest code is the same as the last snapshot, and the other editions are accessible and numbered as above. Some other requirements:

    - The latest edition must not be cumulative of the previous snapshots, i.e. files that appeared in older snapshots but don't appear in later ones (e.g. due to refactoring) should not appear in the latest edition of the code.
    - Meanwhile, there should be continuity between files that do persist between snapshots. I would like Git to know that there are previous editions of these files and not treat them as brand-new files within each edition.

    Some background about my aim: I need to formally revision-control this work rather than keep local private snapshot copies, and I plan to release it as open source, so version control is highly recommended. I am evaluating some of the current popular version control systems (Subversion and Git), but I definitely need a working solution in Git as well as in Subversion. I'm not looking to be persuaded to use one particular tool; I need a solution for each tool I am considering. (I have posted the question separately for each tool, so that folks with expertise in Git or Subversion can give focused answers on one or the other.) The same but separate question for Subversion: Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?
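
    A minimal sketch of such a replay script (assumes Python 3.8+, git on the PATH, snapshots under ./snapshots, and an already-initialised repository in ./repo -- all placeholder paths):

        import os
        import shutil
        import subprocess

        SNAPDIR, REPO = "snapshots", "repo"

        for snap in sorted(os.listdir(SNAPDIR)):   # YYYYMMDD names sort chronologically
            # clear the work tree (all but .git) so files deleted between snapshots vanish
            for entry in os.listdir(REPO):
                if entry == ".git":
                    continue
                path = os.path.join(REPO, entry)
                if os.path.isdir(path):
                    shutil.rmtree(path)
                else:
                    os.remove(path)
            shutil.copytree(os.path.join(SNAPDIR, snap), REPO, dirs_exist_ok=True)
            subprocess.run(["git", "-C", REPO, "add", "-A"], check=True)   # stages deletions too
            subprocess.run(["git", "-C", REPO, "commit", "-m", "Snapshot " + snap], check=True)

    Because each commit captures the full tree and git add -A stages removals, vanished files drop out of the latest edition, while Git's content tracking preserves the history of files that persist between snapshots.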

    Read the article

  • Import a CSV/Excel file into an SQL database with ASP.NET

    - by kiev
    Hi everyone! I am starting a project with ASP.NET in Visual Studio 2008 / SQL Server 2000 (2005 in the future) using C#. The tricky part for me is that the existing DB schema changes often, and the import files' columns will all have to be matched up with the existing DB schema, since they may not be a one-to-one match on column names. (There is a lookup table that provides the table schema with the column names I will use.) I am exploring different ways to approach this and need some expert advice. Are there any existing controls or frameworks that I can leverage to do any of this? So far I have explored the .NET FileUpload control, as well as some third-party upload controls such as SlickUpload, to handle the upload; the uploaded files should be < 500 MB. The next part is reading my CSV/Excel file and parsing it for display to the user, so they can match it against our DB schema. I saw CsvReader and others, but Excel is more difficult since I will need to support different versions. Essentially, the user performing this import will insert and/or update several tables from this import file. There are more advanced requirements, like record matching and a preview of the import records, but I wish to understand this part first. Update: I ended up using CsvReader from LumenWorks.Framework for uploading the CSV files.

    Read the article

  • Oracle Data Pump import to a SQL file fails: ORA-31655 no data or metadata objects

    - by Francisco Quiñones
    Hello, I'm using Data Pump to export/import data; one requirement is to import the data to a SQL file. The OS is Windows. I made the following export:

        expdp system/password directory=dpump_dir dumpfile=tablesdump.dmp content=DATA_ONLY tables=user.tablename

    and it works; I can see the file TABLESDUMP.DMP in the directory path. Then, when I tried to import it to a SQL file:

        impdp system/password directory=dpump_dir dumpfile=tablesdump.dmp sqlfile=tables_export.sql

    the log shows:

        ORA-31655: no data or metadata objects selected for job

    and the SQL file is created empty in the directory path. I'm not a DBA, I'm a Java developer. Can you help me? Thanks.

    Read the article

  • MYOB Service Sales Import

    - by sjw
    I have developed an export file from our job management system that I want to import into MYOB Accounting Plus v18.5. The file is generated without issue, and I have included every single field to make the upload easy (i.e. "Match All" matches every field). The problem I am having is that, no matter what I do, I cannot get the sales to import. Every time, whatever I do and however I create the customer card, it comes back with: Error -190: Customer not found. Sale invoice not imported. I have tried matching using Co./Last Name, Card ID, and Record ID, and every time I get the same error. I have created a single customer with a simple Co./Last Name, Card ID, and Record ID and still, when I try to import using these same fields exactly matched, I get the same error.

    Read the article

  • SQL Server Import table keeping default values

    - by Chrissi
    I am importing a table from one database to another in SQL Server 2008 by right-clicking the target database and choosing Tasks > Import Data... When I import the table I get the column names and types and all the data fine, but I lose the primary key, the identity specifications, and all the default values that were set in the source table. So now I have to set all the default values for each column again manually. Is there any way to keep the default values with the import, or even to restore them afterwards with a query? I am VERY new to this and flailing in the dark, so forgive me if this is a really stupid question...

    Read the article

  • Flex Import Class from a Module within a sub directory

    - by Tom
    I put some modules in a module folder. How do I import classes with the import statement when I'm in a subfolder? This won't work, unlike classes that are in packages.

        modules/SomeModule.mxml:

        <?xml version="1.0"?>
        <mx:Module>
          <mx:Script>
            <![CDATA[
              import Fruit.Apple;
            ]]>
          </mx:Script>
        </mx:Module>

        Directory:

        .
        |-- Fruit
        |   `-- Apple.as
        |-- Modules
        |   `-- SomeModule.mxml
        `-- application.mxml

    Read the article

  • MySQL import in phpmyadmin (CSV) chokes on quotes

    - by Andrew Swift
    I am trying to import a .csv file into a MySQL table via phpMyAdmin. The .csv file is pipe-separated, formatted like this:

        data|d'ata|d'a"ta|dat"a|
        data|"da"ta|data|da't'a|
        dat'a|data|da"ta"|da'ta|

    The data contains quotes, and I have no control over the format in which I receive it -- it is generated by a third party. The problem comes when there is a | followed by a double quote: I always get an "invalid field count in CSV input on line N" error. I am uploading the file from the import page, using Latin1, CSV, fields terminated by |, enclosed by ". I would like to just change the "enclosed by" character, but I keep getting "Invalid parameter for CSV import: Fields enclosed by". I have tried various characters with no success. How can I tell MySQL to accept this format in phpMyAdmin? Setting up these tables is the first step in writing a program that will use uploaded gzipped .csv files to maintain the catalog of an e-commerce site.
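
    If the phpMyAdmin dialog can't be persuaded, one pragmatic pre-processing step (a sketch; feed.txt and clean.csv are placeholder names) is to rewrite the feed as a conventional fully-quoted CSV that imports with default settings:

        import csv

        with open("feed.txt", newline="") as src, open("clean.csv", "w", newline="") as dst:
            writer = csv.writer(dst, quoting=csv.QUOTE_ALL)   # quote and escape every field
            for line in src:
                fields = line.rstrip("\r\n").split("|")
                if fields and fields[-1] == "":
                    fields.pop()    # each source row ends with a trailing pipe
                writer.writerow(fields)

    A plain split on | is safe here because the feed, as shown, never quotes its fields; the csv module then handles the embedded quotes.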

    Read the article
