Search Results

Search found 23901 results on 957 pages for 'deployment process'.

Page 360 of 957

  • Some general C questions.

    - by b-gen-jack-o-neill
    Hello. I am trying to fully understand the process of going from code written in some language to execution by the OS. In my case the language is C and the OS is Windows. So far I have read many different articles, but I am not sure whether I understand the process correctly, and I would like to ask for good articles on the subjects I couldn't find.

    Here is what I think I know about C (and most other languages): the C compiler itself handles only data types, basic math operations, pointer operations, and work with functions. By work with functions I mean how to pass arguments to them and how to get output back. During compilation, a function call is turned into code that pushes the arguments onto the stack, and if the function is not inline, the call is replaced by a symbol for the linker. The linker then finds the function definition and replaces that symbol with a jump to the function's address (and of course a jump back to the program afterwards). If the above is generally true and I understand it correctly, where in the final .exe file does the linker actually put the functions? After the main() function? And what creates the .exe header: the compiler or the linker?

    Now, the additional capabilities of C, today known as the C standard library, are a set of functions and declarations that other programmers wrote to extend and simplify use of the C language. But functions like printf() were (or could be?) written in a different language, or in assembler. Which leads to my next question: can a function like printf() be written in pure C without the use of assembler?

    I know this is quite a big question, but I mostly want to know whether I am right or not. And trust me, I have read a lot of articles on the web, and I would not ask you if I could find this information together in one place, in one article. Instead I have to gather it piece by piece, so I am not sure I am right. Thanks.
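    A minimal sketch of the compiler/linker split (assuming GCC; the file and function names are made up for illustration):

        /* add.c - compiled separately: gcc -c add.c  -> add.o */
        int add(int a, int b) {
            return a + b;
        }

        /* main.c - compiled separately: gcc -c main.c -> main.o
           The call to add() is emitted as an unresolved symbol reference;
           running "gcc main.o add.o -o program" lets the linker patch the
           call site with add()'s final address inside the executable. */
        int add(int a, int b);      /* declaration only - no body here */

        int main(void) {
            return add(2, 3);       /* becomes "call <add>", fixed up by the linker */
        }

    Inspecting main.o with "nm main.o" before linking shows add listed as an undefined (U) symbol, which is exactly the placeholder the linker resolves.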

    Read the article

  • Advice on VMware hardware requirements and host OS

    - by edwin.nathaniel
    Hi all, I'm a newbie developer wanting to learn a bit about virtualization (from the IT point of view, not theoretical/academic).

    What I'd like to do: prepare a machine, install VMware or VirtualBox, and prepare 3 guest OSes (one Win2k8 Server, two Ubuntu Server). Win2k8 will run SQL Server 2k8 and IIS (for ASP.NET MVC deployment); one Ubuntu Server is for Drupal, SugarCRM, MediaWiki and typical LAMP stuff; the other Ubuntu Server is for Java (Tomcat/Jetty + MySQL/PostgreSQL).

    What I'd like to know: what would be the ideal host OS, one that spends as few resources as possible on itself and leaves as much as possible for the VM instances (e.g. does Win2k8 perform better than Linux as a host?), and what would be the ideal machine for this (preferably an AMD-based chip)? I'm not expecting the best performance out of this setup, just a decent one to host one Drupal instance, one ASP.NET MVC app (future, not now), and one Tomcat/Jetty instance.

    NB: If you have a better suggestion for the setup, feel free to let me know (e.g. maybe Drupal and Tomcat can share one instance and the databases can move to another, instead of mapping one web server and one DB server per instance). Thank you.

    Read the article

  • There's a black hole in my server (TcpClient, TcpListener)

    - by Matías
    Hi, I'm trying to build a server that will receive files sent by clients over a network. If the client sends one file at a time there's no problem, I get the file as expected, but if it tries to send more than one I only get the first one. Here's the server code (I'm using one thread per connected client):

        public void ProcessClients()
        {
            while (IsListening)
            {
                ClientHandler clientHandler = new ClientHandler(listener.AcceptTcpClient());
                Thread thread = new Thread(new ThreadStart(clientHandler.Process));
                thread.Start();
            }
        }

    The following code is part of the ClientHandler class:

        public void Process()
        {
            while (client.Connected)
            {
                using (MemoryStream memStream = new MemoryStream())
                {
                    int read;
                    while ((read = client.GetStream().Read(buffer, 0, buffer.Length)) > 0)
                    {
                        memStream.Write(buffer, 0, read);
                    }
                    if (memStream.Length > 0)
                    {
                        Packet receivedPacket = (Packet)Tools.Deserialize(memStream.ToArray());
                        File.WriteAllBytes(Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory), Guid.NewGuid() + receivedPacket.Filename), receivedPacket.Content);
                    }
                }
            }
        }

    On the first iteration I get the first file sent, but after that I don't get anything. I've tried a Thread.Sleep(1000) at the end of every iteration without any luck. On the other side (the client) I have this code:

        client.Connect();
        foreach (var oneFilename in fileList)
            client.Upload(oneFilename);
        client.Disconnect();

    The Upload method:

        public void Upload(string filename)
        {
            FileInfo fileInfo = new FileInfo(filename);
            Packet packet = new Packet() { Filename = fileInfo.Name, Content = File.ReadAllBytes(filename) };
            byte[] serializedPacket = Tools.Serialize(packet);
            netStream.Write(serializedPacket, 0, serializedPacket.Length);
            netStream.Flush();
        }

    netStream (a NetworkStream) is opened in the Connect method and closed in Disconnect. Where's the black hole? Can I send multiple objects as I'm trying to do? Thanks for your time.
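    One pattern that addresses this kind of problem is length-prefixed framing: TCP is a byte stream with no message boundaries, so a read loop that runs until Read() returns 0 only finishes when the sender closes the connection, which is why everything arrives as one blob. A minimal sketch of the idea (hypothetical helper names, not the asker's Packet/Tools types):

        using System;
        using System.IO;
        using System.Net.Sockets;

        static class Framing
        {
            // Sender: write a 4-byte length header, then the payload.
            public static void SendMessage(NetworkStream stream, byte[] payload)
            {
                byte[] header = BitConverter.GetBytes(payload.Length);
                stream.Write(header, 0, header.Length);
                stream.Write(payload, 0, payload.Length);
            }

            // Receiver: read exactly 4 bytes, then exactly that many payload bytes.
            public static byte[] ReceiveMessage(NetworkStream stream)
            {
                byte[] header = ReadExactly(stream, 4);
                int length = BitConverter.ToInt32(header, 0);
                return ReadExactly(stream, length);
            }

            // Read() may return fewer bytes than requested, so loop until done.
            static byte[] ReadExactly(NetworkStream stream, int count)
            {
                byte[] buffer = new byte[count];
                int offset = 0;
                while (offset < count)
                {
                    int read = stream.Read(buffer, offset, count - offset);
                    if (read == 0) throw new IOException("Connection closed mid-message");
                    offset += read;
                }
                return buffer;
            }
        }

    Each ReceiveMessage() call then yields exactly one serialized packet, no matter how many the client pushes through the same connection.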

    Read the article

  • Memory mapped files and "soft" page faults. Unavoidable?

    - by Robert Oschler
    I have two applications (processes) running under Windows XP that share data via a memory mapped file. Despite all my efforts to eliminate per-iteration memory allocations, I still get about 10 soft page faults per data transfer. I've tried every flag there is in CreateFileMapping() and MapViewOfFile() and it still happens. I'm beginning to wonder if it's just the way memory mapped files work. If anyone here knows the O/S implementation details behind memory mapped files, I would appreciate comments on the following theory: if two processes share a memory mapped file and one process writes to it while another reads it, the O/S marks the written pages as invalid. When the other process goes to read the memory areas that now belong to invalidated pages, this causes a soft page fault (by design) and the O/S knows to reload the invalidated page. The number of soft page faults is therefore directly proportional to the size of the data written.

    My experiments seem to bear out this theory. When I share data I write one contiguous block, in other words the entire shared memory area is overwritten each time, and if I make the block bigger the number of soft page faults goes up correspondingly. So, if my theory is true, there is nothing I can do to eliminate the soft page faults short of not using memory mapped files, because that is how they work (using soft page faults to maintain page consistency). What is ironic is that I chose a memory mapped file instead of a TCP socket connection because I thought it would be more efficient.

    If the soft page faults are harmless, please say so. I've heard that at some point, if the number is excessive, the system's performance can be marred. If soft page faults are not intrinsically harmful, I'd like to hear any guidelines as to what number per second counts as "excessive". Thanks.

    Read the article

  • Business Layer Pattern on Rails? MVCL

    - by Fabiano PS
    This is a broad question, and I would appreciate no short, dismissive answers like "that is the model's job, this question is pointless (period)".

    PROBLEM: Where I work, people spent over 2 years building a system for managing a made-to-order manufacturing process in the broadest yet most simplified way possible, involving selling, buying and assembly. The system is written in Ruby on Rails. It has been changed many times, and the result is a mess of callbacks (some are called several times), 200+ models, and fat controllers: all bad.

    The QUESTION is: is there a gem or a pattern designed to handle the business logic of a large Rails app? The logic would be able to talk freely to the models (whose only concerns would be data formatting and validation).

    What I EXPECT is to move complexity out of the various controllers and hard-to-track callbacks into files whose responsibility is to handle the logic of one business operation. In some cases there is a need to wait for a response; in others, validating the input is enough and a background process would take over. For example, selling some products (where we need to wait for the operation to finish):

    1. Set up a view able to take the products as input.
    2. The controller gets the product list entered by the employee and calls the logic:

        Logic::ExecuteWithResponse('sell', 'products',
          :prods    => @product_list_with_qtt,
          :when     => @date,
          :employee => current_user())

    This logic would handle the buying order, assembly order, machine schedule, warehouse reservation, and others.
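    For reference, one commonly used shape for this is a plain-Ruby service (or interactor) object per business operation, kept outside the models and controllers. A minimal sketch under assumed names (SellProducts, Order and StockReservation are invented for illustration):

        # app/services/sell_products.rb
        class SellProducts
          def initialize(products, date, employee)
            @products = products
            @date     = date
            @employee = employee
          end

          # Runs the whole business operation in one transaction and
          # returns the created order so the controller can respond.
          def call
            ActiveRecord::Base.transaction do
              order = Order.create!(:employee => @employee, :due_on => @date)
              @products.each do |product, quantity|
                StockReservation.create!(:order => order, :product => product, :quantity => quantity)
              end
              order
            end
          end
        end

        # In the controller:
        #   order = SellProducts.new(@product_list_with_qtt, @date, current_user).call

    Operations that do not need an immediate answer can be validated the same way and then handed to a background job instead of run inline.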

    Read the article

  • What is the relationship between a Turing machine and a modern computer? [closed]

    - by smwikipedia
    I have heard a lot that modern computers are based on the Turing machine, but I cannot build a bridge between a conceptual Turing machine and a modern computer. Could someone help me build this bridge? Below is my current understanding. I think the computer is a big general-purpose Turing machine, and each program we write is a small specific-purpose Turing machine. A classical Turing machine does its job based on its input and its current internal state, and so do our programs. Let's take a running program (a process) as an example. We know that in the process's address space there are areas for the stack, heap, and code. A classical Turing machine doesn't have the ability to remember many things, so we borrow the concept of a stack from the push-down automaton. The heap and stack areas contain the state of our specific-purpose Turing machine (our program), the code area represents the logic of this small Turing machine, and various I/O devices supply input to it.

    Read the article

  • Custom Database integration with MOSS 2007

    - by Bob
    Hopefully someone has been down this road before and can offer some sound advice as to which direction I should take. I am currently involved in a project in which we will be utilizing a custom database to store data extracted from Excel files based on pre-established templates (to maintain consistency). We currently have a process (written in C# .NET 2008) that can extract the necessary data from the spreadsheets and import it into our custom database. What I am primarily interested in is figuring out the best method for integrating that process with our portal. What I would like to do is let SharePoint keep track of the metadata about the spreadsheet itself and let the custom database keep track of the data contained within the spreadsheet. So, one thing I need is a way to link spreadsheets in SharePoint to the custom database and vice versa. As these spreadsheets will be updated periodically, I need a tried and true way of ensuring that the data remains synchronized between SharePoint and the custom database. I am also interested in finding out how to use the data from the custom database to create reports within the SharePoint portal. Any and all information will be greatly appreciated.
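    As a sketch of one possible linking approach (an assumption, not something from the question; the table and column names are invented): store the document's SPListItem.UniqueId in the custom database as the foreign key, and use a SharePoint item event receiver to re-run the existing extraction whenever a spreadsheet is saved, so the two stores stay synchronized.

        using System.Data.SqlClient;
        using Microsoft.SharePoint;

        // Attached to the document library; fires after a spreadsheet is saved.
        public class SpreadsheetSyncReceiver : SPItemEventReceiver
        {
            public override void ItemUpdated(SPItemEventProperties properties)
            {
                SPListItem item = properties.ListItem;

                // Rows in the custom database are keyed by the item's UniqueId
                // (a Guid), so SharePoint metadata and extracted data stay linked.
                using (SqlConnection conn = new SqlConnection("<connection string>"))
                {
                    conn.Open();
                    SqlCommand cmd = new SqlCommand(
                        "DELETE FROM ExtractedData WHERE SpreadsheetId = @id", conn);
                    cmd.Parameters.AddWithValue("@id", item.UniqueId);
                    cmd.ExecuteNonQuery();
                    // ... re-import rows from item.File.OpenBinary() using the
                    //     existing C# extraction code here ...
                }
            }
        }

    Reporting back inside the portal could then join on the same UniqueId, for example via a web part that queries the custom database.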

    Read the article

  • Wizard form in Struts

    - by Kuntal Basu
    I am creating a wizard in Struts. It contains 4 steps, and for each step I have a separate Action class, say Step1Action.java, Step2Action.java, Step3Action.java and Step4Action.java. In each class there are 2 methods, input() and process(): the input() method shows the page in input mode, and the process() method processes the submitted data (if validation is OK). I carry all the data up to the last step in the session, and save all of it to the database in the last step. Similarly, there are 4 action tags in struts.xml, like:

        <action name="step1" class="com.mycomp.myapp.action.Step1Action1" method="input">
            <result name="success" type="redirectAction">step2</result>
            <result name="input">/view/step1.jsp</result>
        </action>
        <action name="step2" class="com.mycomp.myapp.action.Step1Action2" method="input">
            <result name="success" type="redirectAction">step3</result>
            <result name="input">/view/step2.jsp</result>
        </action>

    But I think I am going wrong. Please tell me how you would handle this case.
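    For comparison, a hedged sketch of one common Struts 2 wizard mapping (the Step1Action class name and the step1Submit action name are placeholders): each step gets one mapping for showing the form, pointing at input(), and one for handling the POST, pointing at process(), with the validation failure falling back to the "input" result.

        <!-- Show step 1 -->
        <action name="step1" class="com.mycomp.myapp.action.Step1Action" method="input">
            <result name="success">/view/step1.jsp</result>
        </action>

        <!-- Handle the step 1 submit, then redirect to step 2 -->
        <action name="step1Submit" class="com.mycomp.myapp.action.Step1Action" method="process">
            <result name="input">/view/step1.jsp</result>
            <result name="success" type="redirectAction">step2</result>
        </action>

    The form in step1.jsp would then post to step1Submit rather than back to step1, which keeps "show the page" and "process the data" as separate mappings per step.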

    Read the article

  • Choosing between cloud (Cloudfoundry ) and virtual servers - for developers

    - by Mike Z
    I just came across some articles on how to set up your own cloud using Cloud Foundry and Ubuntu, and it got me thinking about choosing our infrastructure. If we want to use our own servers, what's the advantage of running a cloud on top of virtual servers versus just using virtual servers and a VPN? If we develop for the cloud now, then later, if we need help, we can quickly move to a cloud provider; but other than that, what are the advantages and disadvantages of a private cloud in these areas:

    * speed of development, testing, and deployment
    * server management
    * security
    * performance: having an extra layer (the cloud), how big a hit does the server take?
    * any other advantages/disadvantages?

    Read the article

  • Flexible classroom environments (OS, Office)

    - by HannesFostie
    I work in the IT department of a training center. We still offer XP and Office 2003 training, but also offer Vista, Win7 and Office 2007. Currently we use VMs on VMware Server, but this is obviously not a superb choice. We're thinking of implementing something like VDI (brainstorm phase, we hardly have any details), but I decided to check here whether people have some clever alternatives. Requirements:

    * Flexible when it comes to deployment
    * Centralized management would be a big plus
    * Allow for different software, whether compatible with each other or not (all of Office except Outlook can be installed simultaneously; for Outlook you need to choose between 2003 and 2007)
    * Allow for different OSes

    We have a big enough budget to implement a proper SAN environment to accommodate the virtualization side of the solution, whatever kind it may be. A support contract will probably be necessary as well, because we need to be able to offer quick solutions to problems, and with only 2 sysadmins that is simply impossible to guarantee.

    Read the article

  • Can I use RegFree COM with an application written in Excel VBA?

    - by Steven
    I have an application that is written in Excel VBA, myApp.xls. Currently we use InstallShield to distribute the application. Since we are moving to Windows Vista, I need to be able to install the application as a standard user, which does not allow me to update the registry during the install process. In addition to the Excel application we also have several VB6 applications. In order to install those applications, I was able to use RegFree COM and Make My Manifest (MMM), as suggested by people on this forum (I greatly appreciate the insight, by the way!). This process, although a bit tedious, worked well. I then packaged the output from MMM in a VS '05 installer project and removed the UAC prompt on the MSI using msiinfo.exe. Now I am faced with installing an application that basically lives in an Excel file. I modified a manifest that MMM created for one of my VB6 apps and tried to run the Excel file through that, but I did not have much luck. Does anybody know of a way to do this? Does RegFree COM work with VBA? Any thoughts or suggestions would be much appreciated. Thanks, Steve
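    One approach that may be worth testing (a sketch, not something from the original question): since the host process is excel.exe and you cannot attach your own manifest to it, script code such as VBA can create registration-free COM objects through the Microsoft.Windows.ActCtx object, pointing it at a manifest file shipped alongside the workbook. The manifest path and ProgID below are placeholders.

        ' Creates a COM object from a side-by-side manifest without any
        ' registry entries, so a standard-user install can still work.
        Dim actCtx As Object
        Set actCtx = CreateObject("Microsoft.Windows.ActCtx")
        actCtx.Manifest = ThisWorkbook.Path & "\myApp.manifest"   ' placeholder path

        Dim comObj As Object
        Set comObj = actCtx.CreateObject("MyLib.MyClass")         ' placeholder ProgID
        comObj.DoSomething

    The manifest itself could be the one MMM already generates for the VB6 components; only the object creation calls in the VBA would change.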

    Read the article

  • segmentation fault on Unix - possible stack corruption

    - by bob
    Hello, I'm looking at a core from a process running on Unix. Usually I can work my way around and dig into the backtrace to try to identify a memory issue. In this case I'm not sure how to proceed. Firstly, the backtrace only gives 3 frames where I would expect a lot more. For those frames, all the function parameters presented appear to be completely invalid; they are not what I would expect. Some pointer parameters have "Cannot access memory at address" associated with them. Would this suggest some kind of complete stack corruption? I ran the process with libumem and all the buffers were reported as being clean; umem_status reported nothing either. So basically I'm stumped. What are the likely causes? What should I look for in the code, since libumem appears to have reported no errors? Any suggestions on how I can debug further? Any extra features in mdb I should consider? Thank you.
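    A few generic mdb starting points on the core (hedged: whether they reveal anything useful depends on how badly the stack is damaged; the paths are placeholders):

        $ mdb /path/to/binary /path/to/core
        > ::status        (how the process died and the faulting address)
        > $C              (stack backtrace with frame pointers)
        > $r              (register state at the time of the fault)

    If the frame pointers themselves have been overwritten, $C will show the same few garbage frames as the debugger, which in itself is consistent with a stack overrun by a callee rather than a heap problem libumem would catch.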

    Read the article

  • Configure custom SSL certificate for RDP on Windows Server 2012 in Remote Administration mode?

    - by Ryan Bolger
    So the release of Windows Server 2012 has removed a lot of the old Remote Desktop related configuration utilities. In particular, there is no more Remote Desktop Session Host Configuration utility that gave you access to the RDP-Tcp properties dialog that let you configure a custom certificate for the RDSH to use. In its place is a nice new consolidated GUI that is part of the overall "edit deployment properties" workflow in the new Server Manager. The catch is that you only get access to that workflow if you have the Remote Desktop Services role installed (as far as I can tell). This seems like a bit of an oversight on Microsoft's part. How can we configure a custom SSL certificate for RDP on Windows Server 2012 when it's running in the default Remote Administration mode without needlessly installing the Remote Desktop Services role?
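    One route that may apply here, offered as a hedged pointer rather than a confirmed answer: in Remote Administration mode the RDP listener's certificate can be set directly through WMI's Win32_TSGeneralSetting class, using the thumbprint of a certificate already installed in the computer's Personal store (with a private key readable by NETWORK SERVICE). The thumbprint below is a placeholder.

        wmic /namespace:\\root\cimv2\TerminalServices PATH Win32_TSGeneralSetting Set SSLCertificateSHA1Hash="THUMBPRINT_OF_INSTALLED_CERT"

    This avoids installing the Remote Desktop Services role just to reach the deployment-properties GUI.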

    Read the article

  • When long polling, why are my other requests taking so long?

    - by Pascal
    The client makes 2 concurrent requests: one which takes 60 seconds (long polling) and another which is NOT long polling and is supposed to return right away. It does return right away when I'm not doing long polling, but as soon as I start long polling with the other request, this one takes forever to execute. Firebug shows the request waiting for 10-50 seconds. On the server, I profiled ALL requests from the moment the PHP script starts to the time it returns to the client, and it shows that each one only took 300 ms or less. This problem started about the same time I started doing long polling (with the other XHR requests). I'm using jQuery for both requests. The server shows that it is under very light load: CPU and memory less than 2%, 8 processes running out of a pool of 15 (it doesn't seem to deviate much from that number 8, even when I run more Ajax requests). I guess each process can run multiple Ajax threads concurrently. I made sure to EXIT from all processes as soon as they're done executing. I don't see how the process pool could have run out if there are still 7 unused processes listed under prstat -J. Also, the problem happens somewhat intermittently. Firefox should be able to handle 2 concurrent Ajax requests. I don't get what the problem is.
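    One frequent culprit in exactly this setup (a guess, not a confirmed diagnosis for this server) is PHP's session locking: session_start() holds an exclusive lock on the session file until the script finishes, so a second request from the same browser session blocks behind the long-polling request even though each script profiles as fast on its own. Releasing the lock before the wait loop avoids that; the user_id field and check_for_new_messages() helper below are placeholders.

        <?php
        // longpoll.php - sketch of a long-polling endpoint that releases
        // the session lock before it starts waiting.
        session_start();
        $userId = $_SESSION['user_id'];   // read what you need first

        session_write_close();            // release the lock so other requests can proceed

        $timeout = 60;
        $start = time();
        while (time() - $start < $timeout) {
            $message = check_for_new_messages($userId);  // hypothetical helper
            if ($message !== null) {
                echo json_encode($message);
                exit;
            }
            sleep(1);
        }
        echo json_encode(null);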

    Read the article

  • Lua Alien Module - Trouble using WriteProcessMemory function, unsure on types (uint32)

    - by jefferysanders
        require "alien"

        -- the address I'm trying to edit in the Mahjong game on Win7
        local SCOREREF = 0x0744D554
        -- this should give me full access to the process
        local ACCESS = 0x001F0FFF
        -- this is my process ID for my open window of Mahjong
        local PID = 1136

        -- function to open the process
        local op = alien.Kernel32.OpenProcess
        op:types{ ret = "pointer", abi = "stdcall"; "int", "int", "int" }

        -- function to write to process memory
        local wm = alien.Kernel32.WriteProcessMemory
        wm:types{ ret = "long", abi = "stdcall"; "pointer", "pointer", "pointer", "long", "pointer" }

        local pRef = op(ACCESS, true, PID)
        local buf = alien.buffer("99")

        -- ptr, uint32, byte arr (no idea what to make this), int, ptr
        -- prints 1 if success, 0 if failed
        print( wm( pRef, SCOREREF, buf, 4, nil) )

    So that is my code. I am not even sure if I have the types set correctly. I am completely lost and need some guidance. I really wish there was more online help/documentation for Alien; it confuses my poor brain. What utterly baffles me is that WriteProcessMemory will sometimes complete successfully (though it does nothing at all, as far as I can tell) and will also sometimes fail. As I've stated, my brain hurts. Any help is appreciated.

    Read the article

  • ADFS 2.0 and WebEx

    - by DavisTasar
    We have a brand new deployment going on, where our University has purchased WebEx MeetingPlace. We have the Cisco CallManager component working, but the integration of Single Sign-On with ADFS 2.0 has been nothing short of torture. The biggest problem I'm working with is that we use split-brain DNS, and our internal and external domain names are different. I'm struggling to determine what credentials are getting passed back and forth, and hitting certificate errors from using the self-signed certificate, etc. Does anyone have any experience with this, or something similar? Do you have any tips or watch-out-for-this advice? I've not worked with a federated authentication system before, and this scenario is very black-box-esque. Sorry, I'm also partially ranting because I'm frustrated.

    Read the article

  • Virtualbox PXE Boot Failing with a Windows Server 2008 R2 Server

    - by Vbitz
    Some fast help on this would be good; I have been at this problem for 14 hours. In a VirtualBox test environment I have 2 virtual machines networked together using an internal network (no traffic runs through the host; it is all at a software level). One is a fresh client with 512 MB of RAM and a dual-core setup; the other is the server, with 1.5 GB of RAM, running Server 2008 R2. The server is configured as a DNS server, DHCP server and domain controller, and also serves PXE booting through WDS (Windows Deployment Services). Both machines can see each other and I am able to start a network boot. The issue comes at the second-to-last stage of the pre-Windows-PE install: the TFTP download of boot.sdi starts, but then stops during the boot process.

    Read the article

  • Forking in PHP on Windows

    - by Doug Kavendek
    We are running PHP on a Windows server (a source of many problems indeed, but migrating is not an option currently). There are a few points where a user-initiated action will need to kick off a few things that take a while and about which the user doesn't need to know if they succeed or fail, such as sending off an email or making sure some third-party accounts are updated. If I could just fork with pcntl_fork(), this would be very simple, but the PCNTL functions are not available in Windows. It seems the closest I can get is to do something of this nature: exec( 'php-cgi.exe somescript.php' ); However, this would be far more complicated. The actions I need to kick off rely on a lot of context that already will exist in the running process; to use the above example, I'd need to figure out the essential data and supply it to the new script in some way. If I could fork, it'd just be a matter of letting the parent process return early, leaving the child to work on a few more things. I've found a few people talking about their own work in getting various PCNTL functions compiled on Windows, but none seemed to have anything available (broken links, etc). Despite this question having practically the same name as mine, it seems the problem was more execution timeout than needing to fork. So, is my best option to just refactor a bit to deal with calling php-cgi, or are there other options? Edit: It seems exec() won't work for this, at least not without me figuring some other aspect of it, as it waits until the call returns. I figured I could use START, sort of like exec( 'start php-cgi.exe somescript.php' );, but it still waits until the other script finishes.
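    For what it's worth, a hedged sketch of the usual Windows workaround (the script name and context values are placeholders): launching the worker through cmd's start /B via popen() and closing the handle immediately returns control to the parent without waiting for the child, so the context the child needs has to be handed over explicitly, for example as arguments or a temp file.

        <?php
        // Fire-and-forget a background PHP worker on Windows.
        // The child cannot see the parent's variables, so serialize the
        // context it needs and pass it along (here: a temp file path).
        function run_in_background($script, $contextFile)
        {
            $cmd = 'start /B php-cgi.exe ' . escapeshellarg($script) . ' '
                 . escapeshellarg($contextFile);
            pclose(popen($cmd, 'r'));   // returns immediately, child keeps running
        }

        $context = tempnam(sys_get_temp_dir(), 'job');
        file_put_contents($context, serialize(array('user_id' => 42, 'action' => 'send_mail')));
        run_in_background('somescript.php', $context);

    This trades the convenience of fork's shared state for an explicit hand-off, which is roughly the refactoring described in the question.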

    Read the article

  • Database INSERT does not take place

    - by reggie
    My code is as follows:

        <?php
        include("config.php");
        $ip = $_SERVER['REMOTE_ADDR'];
        if($_POST['id'])
        {
            $id = $_POST['id'];
            $id = mysql_escape_string($id);
            $ip_sql = mysql_query("select ip_add from Voting_IP where mes_id_fk='$id' and ip_add='$ip'");
            $count = mysql_num_rows($ip_sql);
            if($count == 0)
            {
                $sql = "update Messages set up=up+1 where mes_id='$id'";
                mysql_query($sql);
                $sql_in = "insert into Voting_IP (mes_id_fk,ip_add) values ('$id','$ip')";
                mysql_query($sql_in) or die(mysql_error());
                echo "<script>alert('Thanks for the vote');</script>";
            }
            else
            {
                echo "<script>alert('You have already voted');</script>";
            }
            $result = mysql_query("select up from Messages where mes_id='$id'");
            $row = mysql_fetch_array($result);
            $up_value = $row['up'];
            echo "<img src='button.png' width='110' height='90'>";
            echo $up_value;
        }
        ?>

    My problem is that the INSERT never takes place. The script tag echoes an alert box, and even the img tag is echoed to the web page, but the INSERT does not happen. The config file is fine. Note: this code works on my local machine, which has PHP 5.3, but it does not work on the server, which has PHP 5.2. Any advice?
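    A small debugging sketch in the same mysql_* style (instrumentation, not a fix): check every query's return value and log mysql_error(), since in the code above only the INSERT reports failures, so a silently failing SELECT or UPDATE on the PHP 5.2 box would go unnoticed.

        <?php
        // Wrapper that surfaces the exact failing statement and MySQL error.
        function run_query($sql)
        {
            $result = mysql_query($sql);
            if ($result === false) {
                error_log("Query failed: $sql -- " . mysql_error());
                die("Query failed, see error log.");
            }
            return $result;
        }

        // Usage in the voting code:
        // $ip_sql = run_query("select ip_add from Voting_IP where mes_id_fk='$id' and ip_add='$ip'");
        // run_query("update Messages set up=up+1 where mes_id='$id'");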

    Read the article

  • Ruby, Rails & MySQL parity between Mac Client (10.6) & XServe (10.5)

    - by Meltemi
    We're setting up a RoR environment with development on Mac OS X client (10.6.3), using a Mac OS X Server (10.5.8) for testing and eventually deployment. I'd like to get as many components in sync on these machines as possible, and I'm wondering if there are any pitfalls. I seem to understand what's necessary on the client, but the server has some hardwired stuff that I want to make sure doesn't break, or is updated correctly. Currently installed on both machines we have:

    OS X Client (10.6.3):
    * Ruby 1.8.7
    * Rails 2.3.5
    * MySQL (not installed yet)

    OS X Server (10.5.8):
    * Ruby 1.8.6
    * Rails 2.3.5
    * MySQL Ver 14.12 Distrib 5.0.82

    Any suggestions? Ideally from someone who's done this on Leopard Server as well, but I'll listen to general tips and procedures.

    Read the article

  • Web Deploy to IIS7 fails with 401 (Unauthorized)

    - by Trex
    We have IIS7 running on Windows Web Server 2008 R2 and it's set up to support Web Deploy. It worked fine when we used the default Administrator account. We recently disabled this account (for security reasons) and are now trying to deploy using another account which is a member of the Administrators group, but the deploy fails with 401 (Unauthorized). More specifically, it says:

        Connected to '<IP>' using Web Deployment Agent Service, but could not authorize. Make sure you are an admin on '<IP>'.
        The remote server returned an error: (401) Unauthorized.

    Does anybody have any idea why this is happening? Thanks. Trex
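    One setting worth checking (a common cause of exactly this symptom, though not confirmed for this particular server): with Remote UAC, local accounts in the Administrators group other than the built-in Administrator get a filtered token over the network, so the Web Deployment Agent sees them as non-admins. Remote token filtering is disabled with the LocalAccountTokenFilterPolicy value:

        reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f

    A domain account in the local Administrators group would be an alternative that avoids changing this policy.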

    Read the article

  • Fork or copy a user's browser session in IE

    - by jmoeller
    Is it possible to fork a user's session (or do something similar) in an Internet Explorer plugin? I want to process the page the user is on when they click a button in the toolbar. To avoid interrupting the user's browsing, I'd like to "copy" everything so I can parse and process the page in the background. The processing can involve things such as loading the content of the result links on a Google search, if that's where the button was clicked. So what I basically want is to imitate "Ctrl+N" but hide the window from the user, so they won't be interrupted. As you can see, if you fill out and submit the form on http://www.snee.com/xml/crud/posttest.html and press "Ctrl+N", everything posted will still appear in the new window, but it won't post the data twice. I was thinking of somehow copying the IWebBrowser2, but I'm not sure if that's possible (I haven't been able to find any information on MSDN), and I don't know if it copies the session as well. Creating a new instance of IWebBrowser2 and simply navigating to the current URL isn't a valid solution, because the POST variables of course don't get carried over.

    Read the article

  • NFSv3 Asynchronous Write Depends on Block Size?

    - by Joe Swanson
    I am trying to figure out if my NFSv3 deployment is performing safe asynchronous writes. I suspect that it is doing strictly synchronous writes, as I am getting poor performance in general. I used Wireshark to look at the 'stable' flag in WRITE calls and to look for COMMIT calls. I noticed that, with especially large block sizes, writes appear to be performed asynchronously:

        dd if=/dev/zero of=/proj/re3/0/zero bs=2097152 count=512

    However, smaller block sizes appear to be performed strictly synchronously:

        dd if=/dev/zero of=/proj/re3/0/zero bs=8192 count=655360

    What gives? How does the client decide whether to tell the server to perform writes synchronously or asynchronously? Is there any way I can get smaller block sizes to be performed asynchronously?

    Read the article

  • Locking issues with replacing files on a website

    - by Moe Sisko
    I want to replace existing files on an IIS website with updated versions. Say these files are large PDF documents which can be accessed via hyperlinks. The site is up 24x7, so I'm concerned about locking issues when a file is being updated at exactly the same time that someone is trying to read it. The files are updated using C# code run on the server. I can think of two options for opening the file for writing.

    Option 1) Open the file for writing using FileShare.Read:

        using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.Read))

    While this file is open and a user requests the same file for reading in a web browser via a hyperlink, the document opens up as a blank page.

    Option 2) Open the file for writing using FileShare.None:

        using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None))

    While this file is open and a user requests the same file for reading in a web browser via a hyperlink, the browser shows an error. In IE 8 you get HTTP 500, "The website cannot display the page", and in Firefox 3.5 you get: "The process cannot access the file because it is being used by another process."

    The browser behaviour kind of makes sense and seems reasonable. I guess it's highly unlikely that a user will attempt to read a file at exactly the same time you are updating it. It would be nice if the file update were somehow atomic, like a database update wrapped in a transaction. I'm wondering if you worry about this sort of thing, whether you prefer either of the above options, or whether you have other approaches of your own for updating files.
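    A third option worth sketching (an assumption that the standard .NET file APIs are acceptable here, not something from the question): write the new version to a temporary file in the same directory and then swap it in with File.Replace. The swap is a rename-level operation, so a new reader either gets the complete old file or the complete new one; an in-flight download of the old version can still break, but the window for a half-written file disappears.

        // (uses System.IO)
        public static void PublishNewVersion(string newContentTempPath, string livePath)
        {
            string backupPath = livePath + ".bak";

            if (File.Exists(livePath))
            {
                // Swaps the live file for the new one on the same volume,
                // keeping the previous version as a backup.
                File.Replace(newContentTempPath, livePath, backupPath);
            }
            else
            {
                File.Move(newContentTempPath, livePath);
            }
        }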

    Read the article

  • Creating nodes programmatically in Drupal 6

    - by John
    Hey, I have been searching for how to create nodes programmatically in Drupal 6. I found some entries here on Stack Overflow, but the questions seemed to be for older versions, or the solutions did not work for me. OK, so here is my current process for trying to create a node:

        $node = new stdClass();
        $node->title = "test title";
        $node->body = "test body";
        $node->type = "story";
        $node->created = time();
        $node->changed = $node->created;
        $node->status = 1;
        $node->promote = 1;
        $node->sticky = 0;
        $node->format = 1;
        $node->uid = 1;
        node_save($node);

    When I execute this code, the node is created, but when I go to the administration page it throws the following errors:

        warning: Invalid argument supplied for foreach() in C:\wamp\www\steelylib\includes\menu.inc on line 258.
        warning: Invalid argument supplied for foreach() in C:\wamp\www\steelylib\includes\menu.inc on line 258.
        user warning: Duplicate entry '36' for key 1 query: INSERT INTO node_comment_statistics (nid, last_comment_timestamp, last_comment_name, last_comment_uid, comment_count) VALUES (36, 1269980590, NULL, 1, 0) in C:\wamp\www\steelylib\sites\all\modules\nodecomment\nodecomment.module on line 409.
        warning: Invalid argument supplied for foreach() in C:\wamp\www\steelylib\includes\menu.inc on line 258.
        warning: Invalid argument supplied for foreach() in C:\wamp\www\steelylib\includes\menu.inc on line 258.

    I've looked at different tutorials, and all seem to follow the same process. I'm not sure what I am doing wrong. I am using Drupal 6.15. When I roll back the database (to right before I made the changes), the errors are gone. Any help is appreciated!
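    One variation that may be worth trying (a sketch of the usual Drupal 6 pattern, not a confirmed fix for the nodecomment error above): let core fill in its own defaults with node_object_prepare() before setting your fields, so properties expected by other modules are not missing at save time.

        <?php
        // Drupal 6: load the include that defines node_object_prepare().
        module_load_include('inc', 'node', 'node.pages');

        $node = new stdClass();
        $node->type = 'story';
        node_object_prepare($node);   // sets uid, created, status, promote, comment, etc.

        $node->title    = 'test title';
        $node->body     = 'test body';
        $node->teaser   = node_teaser('test body');
        $node->uid      = 1;
        $node->language = '';

        node_save($node);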

    Read the article
